Set of models for classification of 3D volumes

Overview

Classification models 3D Zoo - Keras and TF.Keras

This repository contains 3D variants of popular CNN classification models such as ResNet, DenseNet, VGG, etc. It also contains weights obtained by converting ImageNet weights from the corresponding 2D models.

This repository is based on the great classification_models repo by @qubvel.

Architectures:

  • ResNet [18, 34, 50, 101, 152]
  • SE-ResNet [18, 34, 50, 101, 152]
  • SE-ResNeXt [50, 101]
  • SENet154
  • ResNeXt [50, 101]
  • VGG [16, 19]
  • DenseNet [121, 169, 201]
  • Inception ResNet V2
  • Inception V3
  • MobileNet
  • MobileNet v2

Installation

pip install classification-models-3D

Examples

Loading a model with ImageNet weights:
# for keras
from classification_models_3D.keras import Classifiers

# for tensorflow.keras
# from classification_models_3D.tfkeras import Classifiers

ResNet18, preprocess_input = Classifiers.get('resnet18')
model = ResNet18(input_shape=(128, 128, 128, 3), weights='imagenet')

All available nets for the Classifiers.get() method: 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'seresnet18', 'seresnet34', 'seresnet50', 'seresnet101', 'seresnet152', 'seresnext50', 'seresnext101', 'senet154', 'resnext50', 'resnext101', 'vgg16', 'vgg19', 'densenet121', 'densenet169', 'densenet201', 'inceptionresnetv2', 'inceptionv3', 'mobilenet', 'mobilenetv2'
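
For a task with a different number of classes, the backbone can be combined with a custom head. Below is a minimal sketch: include_top=False follows the qubvel classification_models convention, while num_classes and the pooling head are illustrative assumptions rather than part of this repo's API.

from tensorflow.keras import layers, models
from classification_models_3D.tfkeras import Classifiers

ResNet18, preprocess_input = Classifiers.get('resnet18')

# Backbone without the ImageNet classification head
base = ResNet18(input_shape=(64, 64, 64, 3), weights='imagenet', include_top=False)

# Illustrative custom head: global average pooling + softmax over num_classes (assumed)
num_classes = 2
x = layers.GlobalAveragePooling3D()(base.output)
output = layers.Dense(num_classes, activation='softmax')(x)
model = models.Model(inputs=base.input, outputs=output)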

Convert imagenet weights (2D -> 3D)

The code to convert 2D ImageNet weights to their 3D variants is available here: convert_imagenet_weights_to_3D_models.py. The weights were obtained with TF2, but they work OK with Keras + TF1 as well.
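
The general idea behind such a conversion is to inflate each 2D convolution kernel along a new depth axis. The snippet below is only a rough illustration of this common approach (repeat the kernel over the depth and rescale); the exact procedure used for the published weights is in the linked script.

import numpy as np

def inflate_conv_kernel(kernel_2d, depth):
    # kernel_2d has shape (kh, kw, in_channels, out_channels), as in a 2D Conv layer.
    # Repeat it along a new depth axis and divide by depth so that the summed
    # response over the extra dimension stays on the same scale as in 2D.
    kernel_3d = np.repeat(kernel_2d[np.newaxis, ...], depth, axis=0)
    return kernel_3d / depth  # shape: (depth, kh, kw, in_channels, out_channels)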

How to choose input shape

If the initial 2D model had input shape (512, 512, 3), then you can use shape (D, H, W, 3) where D * H * W ~= 512 * 512, so something like (64, 64, 64, 3) will be fine.
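
A hypothetical helper for picking the depth given the spatial size (suggest_depth is not part of the package, just an illustration of the rule above):

def suggest_depth(h, w, ref_side=512):
    # Choose D so that D * H * W roughly matches the pixel count of the 2D input.
    return max(1, round((ref_side * ref_side) / (h * w)))

print(suggest_depth(64, 64))    # 64  -> input shape (64, 64, 64, 3)
print(suggest_depth(128, 128))  # 16  -> input shape (16, 128, 128, 3)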

Training on a single NVIDIA 1080 Ti (11 GB) worked with:

  • DenseNet121, DenseNet169 and ResNet50 with shape (96, 128, 128, 3) and batch size 6
  • DenseNet201 with shape (96, 128, 128, 3) and batch size 5
  • ResNet18 with shape (128, 160, 160, 3) and batch size 6


Unresolved problems

  • There is no DepthwiseConv3D layer in Keras, so this repo uses a custom layer from this repo by @alexandrosstergiou, which can be slower than a native implementation.
  • There are no ImageNet weights for 'inceptionresnetv2' and 'inceptionv3'.

Description

This code was used to win 1st place in the DrivenData competition Advance Alzheimer’s Research with Stall Catchers.

More details on ArXiv: https://arxiv.org/abs/2104.01687

Citation

If you find this code useful, please cite it as:

@InProceedings{RSolovyev_2021_stalled,
  author = {Solovyev, Roman and Kalinin, Alexandr A. and Gabruseva, Tatiana},
  title = {3D Convolutional Neural Networks for Stalled Brain Capillary Detection},
  booktitle = {Arxiv: 2104.01687},
  month = {April},
  year = {2021}
}
Comments
  • Update __init__.py

    Using keras 2.9.0, import keras_applications as ka gives the following error: ModuleNotFoundError: No module named 'keras_applications'.

    Instead, using from keras import applications as ka works!
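
    One possible way to keep the import working across Keras versions (a sketch, not part of the package):

    try:
        import keras_applications as ka
    except ImportError:
        from keras import applications as ka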

    opened by msmuskan 0
  • Pushing current version to PyPI

    Hello @ZFTurbo,

    if you have time, please push the current updated status (with ConvNeXt) of this repo to PyPI. :)

    Thanks again for the great work and your time!

    Cheers, Dominik

    opened by muellerdo 0
  • Grad cam issue

    Hello,

    base_model, preprocess_input = Classifiers.get('seresnext50')
    model = base_model(input_shape=(512, 512, 20, 1), weights=None, include_top=False)
    x = Flatten()(model.output)
    x = Dense(1024, activation='sigmoid')(x)
    x = Dense(2, activation='sigmoid')(x)

    I am training the model and the accuracy lives up to expectation, but the Grad-CAM maps are quite far from the region of focus. How can the accuracy be good while the Grad-CAM misses the targeted area?

    Using the layer 'activation-161' as the output (Grad-CAM generation code reference: https://github.com/fitushar/3D-Grad-CAM/blob/master/3DGrad-CAM.ipynb), the results are always at the border of the image.

    opened by ntirupathirao18 0
  • ImportError: cannot import name 'VersionAwareLayers' from 'keras.layers'

    Thank you for the great work.

    I am experiencing the following error over and over, even though I created a brand new tensorflow environment and installed all the necessary libraries in it. Could you please have a look and guide me on how to solve this problem? Thank you.

    ImportError: cannot import name 'VersionAwareLayers' from 'keras.layers' (/home/ubuntu/anaconda3/envs/cm_3d/lib/python3.7/site-packages/keras/layers/__init__.py)

    opened by nasir3843 2
  • 3D DenseNet

    Hello and sorry to bother you beforehand,

    I am currently working on my master's thesis project and trying to implement a 3D DenseNet-121 with knee MRIs as input data. While searching for how to implement a 3D version of DenseNet, I came across your repository and tried to adapt it for my application.

    I have some issues with my attempt and didn't know where else to ask about them, so I am sorry if I am completely off topic asking here.

    Firstly, my input shapes are (250, 320, 18, 1). When I give them to the 3D DenseNet I developed, with stride_size=1 for my conv block and pooling_size=(2, 2, 2) with strides=(2, 2, 1) for the AveragePooling3D layer in the transition block, the model is constructed properly for that input size. However, when I try to load a DenseNet121 from the classification_models_3d.tfkeras classifiers, I am unable to construct it with input_shape=(250, 320, 18, 1), stride_size=1 and kernel_size=2; it gives the error "Negative dimension size... for node pool4_pool/AvgPool3D". Is there a way to specifically define the strides for the AvgPool3D layer in the transition block?

    And secondly, I was thinking of loading the 3D weights into my 3D DenseNet-121. Is there a folder in your repository where I can find your ImageNet pre-trained weights?

    Again, thank you for making this repository publicly available, and sorry if I am completely off topic asking such things here.

    I look forward to your answer. Kind regards, Anastasis

    opened by alexopoulosanastasis 4
  • What are the limitations on Inceptionv3 input shape?

    I always seem to get this error when I try to create an InceptionV3 model, no matter what input_shape I use. What are the limitations on the input shape there?

    InvalidArgumentError: Negative dimension size caused by subtracting 3 from 2 for '{{node conv3d_314/Conv3D}} = 
    Conv3D[T=DT_FLOAT, data_format="NDHWC", dilations=[1, 1, 1, 1, 1], padding="VALID", strides=[1, 2, 2, 2, 1]](Placeholder, 
    conv3d_314/Conv3D/ReadVariableOp)' with input shapes: [?,2,17,17,192], [3,3,3,192,320].
    
    opened by mazatov 0