An Arbitrary Scale Super-Resolution Approach for 3D MR Images using Implicit Neural Representation


ArSSR

This repository is the PyTorch implementation of our manuscript "An Arbitrary Scale Super-Resolution Approach for 3-Dimensional Magnetic Resonance Image using Implicit Neural Representation" [ArXiv].

Figure 1: Overview of the ArSSR model.

Abstract

High-resolution (HR) medical images provide rich anatomical structure details that facilitate early and accurate diagnosis. In magnetic resonance imaging (MRI), constrained by hardware capacity, scan time, and patient cooperation, isotropic 3-dimensional (3D) HR image acquisition typically requires a long scan time and results in small spatial coverage and low signal-to-noise ratio (SNR). Recent studies have shown that, with deep convolutional neural networks, isotropic HR MR images can be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods learn a scale-specific mapping between LR and HR images, so each trained network can only handle a fixed up-sampling rate. Achieving different up-sampling rates then requires building and training multiple SR networks, which is time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images at different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. The SR task is then converted to representing the implicit voxel function via deep neural networks from a set of paired HR and LR training examples. The ArSSR model consists of an encoder network and a decoder network: the convolutional encoder network extracts feature maps from the LR input images, and the fully connected decoder network approximates the implicit voxel function. Due to the continuity of the learned function, a single trained ArSSR model can reconstruct HR images from any input LR image at an arbitrary up-sampling rate. Experimental results on three datasets show that the ArSSR model achieves state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model for arbitrary up-sampling scales. All the NIfTI data for Figure 2 can be downloaded: LR image, 2x SR result, 3.2x SR result, 4x SR result.

Figure 2: An example of the SISR tasks at three different isotropic up-sampling scales k = {2, 3.2, 4} for a 3D brain MR image, all produced by the single ArSSR model.
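
To make the "implicit voxel function" concrete, below is a minimal PyTorch sketch of the decoder idea. This is an illustration written for this README rather than the repository's actual implementation; the layer sizes follow the defaults listed in Section 3.2.

import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Sketch: MLP mapping (local feature vector, continuous voxel coordinate) -> intensity."""
    def __init__(self, feature_dim=128, width=256, depth=8):
        super().__init__()
        layers = [nn.Linear(feature_dim + 3, width), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Linear(width, width), nn.ReLU()]
        layers.append(nn.Linear(width, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, feature, xyz):
        # feature: (N, K, feature_dim) features sampled from the encoder's feature map
        # xyz:     (N, K, 3) continuous voxel coordinates in the HR image grid
        return self.mlp(torch.cat([feature, xyz], dim=-1))  # (N, K, 1) intensities

Because xyz is continuous, the decoder can be queried on a grid of any resolution, which is what makes the up-sampling scale arbitrary.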


1. Running Environment

  • python 3.7.9
  • pytorch-gpu 1.8.1
  • tensorboard 2.6.0
  • SimpleITK, tqdm, numpy, scipy, skimage
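
For reference, one way to create such an environment (the conda/pip split and the torch wheel below are illustrative, not prescribed by the repo; pick the build that matches your CUDA version):

# create and activate a Python 3.7.9 environment, then install the dependencies
conda create -n ArSSR python=3.7.9
conda activate ArSSR
pip install torch==1.8.1 tensorboard==2.6.0 SimpleITK tqdm numpy scipy scikit-image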

2. Pre-trained Models

In the pre_trained_models folder, we provide three ArSSR models (with three different encoder networks) pre-trained on the HCP-1200 dataset. You can improve the resolution of your images through the following command:

python test.py -input_path [input_path] \
               -output_path [output_path] \
               -encoder_name [RDN, ResCNN, or SRResNet] \
               -pre_trained_model [pre_trained_model] \
               -scale [scale] \
               -is_gpu [is_gpu] \
               -gpu [gpu]

where,

  • input_path is the path of the directory containing the LR input image; it should not include the input filename.

  • output_path is the path of the output directory; it should not include the output filename.

  • encoder_name is the type of the encoder network, including RDN, ResCNN, or SRResNet.

  • pre_trained_model is the full path of the pre-trained ArSSR model (e.g., for the ArSSR model with the RDN encoder network: ./pre_trained_models/ArSSR_RDN.pkl).

  • !!! Note that encoder_name and pre_trained_model have to match. E.g., if you use the ArSSR model with the ResCNN encoder network, encoder_name should be ResCNN and pre_trained_model should be ./pre_trained_models/ArSSR_ResCNN.pkl.

  • scale is the up-sampling scale k; it can be an int or a float.

  • is_gpu is a flag for whether to use the GPU (0 -> CPU, 1 -> GPU).

  • gpu is the index of the GPU to use.
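
For example, an illustrative invocation (the input/output paths here are placeholders, not files shipped with the repo) performing 2.5x SR with the RDN encoder on GPU 0:

python test.py -input_path ./lr_images/ \
               -output_path ./sr_results/ \
               -encoder_name RDN \
               -pre_trained_model ./pre_trained_models/ArSSR_RDN.pkl \
               -scale 2.5 \
               -is_gpu 1 \
               -gpu 0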

3. Training from Scratch

3.1. Data

In our experiment, we train the ArSSR model on the HCP-1200 dataset. In particular, the HCP-1200 dataset is split into three parts: 780 for training, 111 for validation, and 222 for testing. More details about the HCP-1200 dataset can be found in our manuscript [ArXiv]. You can download the pre-processed training set and validation set from [Google Drive].

3.2. Training

Using our pre-processed training and validation sets from [Google Drive], the pipeline for training the ArSSR model consists of three steps:

  1. unzip the downloaded file data.zip.
  2. put the data in the ArSSR directory.
  3. run the following command.
python train.py -encoder_name [encoder_name] \
                -decoder_depth [decoder_depth] \
                -decoder_width [decoder_width] \
                -feature_dim [feature_dim] \
                -hr_data_train [hr_data_train] \
                -hr_data_val [hr_data_val] \
                -lr [lr] \
                -lr_decay_epoch [lr_decay_epoch] \
                -epoch [epoch] \
                -summary_epoch [summary_epoch] \
                -bs [bs] \
                -ss [ss] \
                -gpu [gpu]

where,

  • encoder_name is the type of the encoder network, including RDN, ResCNN, or SRResNet.
  • decoder_depth is the depth of the decoder network (default=8).
  • decoder_width is the width of the decoder network (default=256).
  • feature_dim is the dimension of the feature vector (default=128).
  • hr_data_train is the file path of HR patches for training (if you use our pre-processed data, this item can be ignored).
  • hr_data_val is the file path of HR patches for validation (if you use our pre-processed data, this item can be ignored).
  • lr is the initial learning rate (default=1e-4).
  • lr_decay_epoch is the number of epochs after which the learning rate is multiplied by 0.5 (default=200).
  • epoch is the total number of training epochs (default=2500).
  • summary_epoch is the interval (in epochs) at which the current model is saved (default=200).
  • bs is the number of LR-HR patch pairs per batch, i.e., N in Eq. 3 (default=15).
  • ss is the number of sampled voxel coordinates, i.e., K in Eq. 3 (default=8000).
  • gpu is the index of the GPU to use.
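
For example, an illustrative run with the RDN encoder on GPU 0, using all the defaults listed above (hr_data_train and hr_data_val are omitted here because the pre-processed data is used):

python train.py -encoder_name RDN \
                -decoder_depth 8 \
                -decoder_width 256 \
                -feature_dim 128 \
                -lr 1e-4 \
                -lr_decay_epoch 200 \
                -epoch 2500 \
                -summary_epoch 200 \
                -bs 15 \
                -ss 8000 \
                -gpu 0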

4. Citation

If you find our work useful in your research, please cite:

@misc{wu2021arbitrary,
      title={An Arbitrary Scale Super-Resolution Approach for 3-Dimensional Magnetic Resonance Image using Implicit Neural Representation}, 
      author={Qing Wu and Yuwei Li and Yawen Sun and Yan Zhou and Hongjiang Wei and Jingyi Yu and Yuyao Zhang},
      year={2021},
      eprint={2110.14476},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}