Segmentation and Identification of Vertebrae in CT Scans using CNN, k-means Clustering and k-NN

Overview

This repository contains the code accompanying our paper on the segmentation and identification of vertebrae in CT scans using a CNN, k-means clustering, and k-NN.

If you use this code for your research, please cite our paper:

@Article{informatics8020040,
AUTHOR = {Altini, Nicola and De Giosa, Giuseppe and Fragasso, Nicola and Coscia, Claudia and Sibilano, Elena and Prencipe, Berardino and Hussain, Sardar Mehboob and Brunetti, Antonio and Buongiorno, Domenico and Guerriero, Andrea and Tatò, Ilaria Sabina and Brunetti, Gioacchino and Triggiani, Vito and Bevilacqua, Vitoantonio},
TITLE = {Segmentation and Identification of Vertebrae in CT Scans Using CNN, k-Means Clustering and k-NN},
JOURNAL = {Informatics},
VOLUME = {8},
YEAR = {2021},
NUMBER = {2},
ARTICLE-NUMBER = {40},
URL = {https://www.mdpi.com/2227-9709/8/2/40},
ISSN = {2227-9709},
DOI = {10.3390/informatics8020040}
}

Graphical Abstract: [figure]


Materials

The dataset can be downloaded for free at this URL.


Configuration and pre-processing

Configure the file config/paths.py according to the paths on your machine. Kindly note that base_dataset_dir must be an absolute path pointing to the directory that contains the subfolders with the images and labels used for training and validating the algorithms in this repository.
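
As a minimal sketch, config/paths.py could look like the following. Only base_dataset_dir is documented in this README; the derived subfolder names below are illustrative assumptions:

# config/paths.py -- minimal sketch; only base_dataset_dir is documented in
# this README, the derived subfolder names are illustrative assumptions.
import os

# Absolute path to the directory containing the images/labels subfolders.
base_dataset_dir = "/home/user/datasets/VerSe"

# Hypothetical derived paths; adapt them to the actual repository layout.
train_images_folder = os.path.join(base_dataset_dir, "imagesTr")
train_labels_folder = os.path.join(base_dataset_dir, "labelsTr")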

In order to perform pre-processing, execute the following scripts in the given order (a complete example invocation is sketched after the list).

  1. Perform the Train / Test split:
python run/task0/split.py --original-training-images=OTI --original-training-labels=OTL \
                          --original-validation-images=OVI --original-validation-labels=OVL

Where:

  • OTI is the path with the CT scans from the original dataset (downloaded from the VerSe challenge, see the link above);
  • OTL is the path with the labels related to the original dataset;
  • OVI is the path where test images will be put;
  • OVL is the path where test labels will be put.
  2. Crop the split datasets:
python run/task0/crop_mask.py --original-training-images=OTI --original-training-labels=OTL \
                              --original-validation-images=OVI --original-validation-labels=OVL

Where the arguments are the same as in step 1.

  3. Pre-process the cropped datasets (see also the pre-processing of Payer et al.):
python run/task0/pre_processing.py
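
For example, a complete pre-processing run could look like the following (all paths are hypothetical placeholders):

python run/task0/split.py --original-training-images=/data/verse/images \
                          --original-training-labels=/data/verse/labels \
                          --original-validation-images=/data/verse/imagesTs \
                          --original-validation-labels=/data/verse/labelsTs
python run/task0/crop_mask.py --original-training-images=/data/verse/images \
                              --original-training-labels=/data/verse/labels \
                              --original-validation-images=/data/verse/imagesTs \
                              --original-validation-labels=/data/verse/labelsTs
python run/task0/pre_processing.py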

Binary Segmentation

This stage exploits a 3D V-Net. The workflow followed for binary segmentation is depicted in the figure below:

[Figure: binary segmentation workflow]

Training

To perform the training, the syntax is as follows:

python run/task1/train.py --epochs=NUM_EPOCHS --batch=BATCH_SIZE --workers=NUM_WORKERS \
                          --lr=LR --val_epochs=VAL_EPOCHS

Where:

  • NUM_EPOCHS is the number of epochs for which to train the CNN (we often used 500 or 1000 in our experiments);
  • BATCH_SIZE is the batch size (we often used 8 in our experiments, in order to benefit from the BatchNormalization layers);
  • NUM_WORKERS is the number of workers for data loading (see the PyTorch documentation);
  • LR is the learning rate;
  • VAL_EPOCHS is the number of epochs after which validation is performed during training (a checkpoint model is also saved every VAL_EPOCHS epochs).
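
For instance, a run consistent with the settings above would be (the LR and VAL_EPOCHS values here are only illustrative):

python run/task1/train.py --epochs=500 --batch=8 --workers=4 \
                          --lr=0.001 --val_epochs=50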

Inference

To perform the inference, the syntax is as follows:

python run/task1/segm_bin.py --path_image_in=PATH_IMAGE_IN --path_mask_out=PATH_MASK_OUT

Where:

  • PATH_IMAGE_IN is the folder with input images;
  • PATH_MASK_OUT is the folder where the output masks will be written.
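
A hypothetical invocation, reusing the output folder naming described at the end of this README:

python run/task1/segm_bin.py --path_image_in=/data/verse/imagesTs \
                             --path_mask_out=/data/verse/predTs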

An example inference result is depicted in the following figure:

[Figure: example binary segmentation inference result]

Metrics Calculation

In order to calculate binary segmentation metrics, the syntax is as follows:

python run/task1/metrics.py
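
For reference, the Dice coefficient at the core of binary segmentation metrics can be computed as in this minimal numpy sketch (an illustration, not the repository's implementation):

import numpy as np

def dice_coefficient(pred, gt):
    # Dice similarity coefficient between two binary masks:
    # 2 * |pred AND gt| / (|pred| + |gt|)
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0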

Multiclass Segmentation

The workflow followed for multiclass segmentation is depicted in the figure below:

[Figure: multiclass segmentation workflow]

To perform the multiclass segmentation (it can only be run on the binary segmentation output), the syntax is as follows:

python run/task2/multiclass_segmentation.py --input-path=INPUT_PATH \
                                            --gt-path=GT_PATH \
                                            --output-path=OUTPUT_PATH \
                                            --use-inertia-tensor=INERTIA \
                                            --no-metrics=NOM

Where:

  • INPUT_PATH is the path to the folder containing the binary spine masks obtained in the previous steps (or the binary spine ground truth);
  • GT_PATH is the path to the folder containing the ground truth labels;
  • OUTPUT_PATH is the path where the output multiclass masks will be written;
  • INERTIA can be either 0 or 1, depending on whether you want to include the inertia tensor in the feature set for discriminating between bodies and arches (useful for scoliosis cases); the default is 0;
  • NOM can be either 0 or 1, depending on whether you want to skip the calculation of the multi-Hausdorff distance and multi-ASSD for the vertebrae labelling (which can be very computationally expensive with this implementation); the default is 1.
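
A hypothetical invocation (the paths are placeholders; predTs and predMulticlass follow the output folder naming described below):

python run/task2/multiclass_segmentation.py --input-path=/data/verse/predTs \
                                            --gt-path=/data/verse/labelsTs \
                                            --output-path=/data/verse/predMulticlass \
                                            --use-inertia-tensor=0 \
                                            --no-metrics=1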

Figures highlighting the different steps involved in this stage follow:

  • Morphology [figure]

  • Connected components [figure]

  • Clustering and arch/body coupling [figure]

  • Centroids computation [figure]

  • Mesh reconstruction [figure]
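
To illustrate the centroids computation step, a minimal scipy-based sketch (an illustration under assumptions, not the repository's implementation) could be:

import numpy as np
from scipy import ndimage

def vertebra_centroids(multiclass_mask):
    # One centroid (in voxel coordinates) per vertebra label in the mask;
    # label 0 is assumed to be the background.
    labels = [l for l in np.unique(multiclass_mask) if l != 0]
    centers = ndimage.center_of_mass(multiclass_mask > 0, multiclass_mask, labels)
    return dict(zip(labels, centers))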


Visualization of the Predictions

The base_dataset_dir folder also contains the output folders:

  • predTr contains the binary segmentation predictions computed on the training set;
  • predTs contains the binary segmentation predictions computed on the test set;
  • predMulticlass contains the multiclass segmentation predictions and the JSON files with the centroids' positions.
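
As a sketch of consuming the centroid files, assuming (hypothetically) that each JSON maps a vertebra label to a coordinate triple; the actual schema in predMulticlass may differ:

import json

# Hypothetical file name; check predMulticlass for the actual naming scheme.
with open("predMulticlass/case001_centroids.json") as f:
    centroids = json.load(f)

for label, coords in centroids.items():
    print(label, coords)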
