pmapper

pmapper is a super-resolution and deconvolution toolkit for python 3.6+. PMAP stands for Poisson Maximum A-Posteriori, a highly flexible and adaptable algorithm for these problems. An implementation of the contemporary Richardson-Lucy algorithm is included for comparison.

The name of this repository is an homage to MTF-Mapper, a slanted edge MTF measurement program written by Frans van den Bergh.

The implementations of all algorithms in this repository are CPU/GPU agnostic and performant, able to perform 4K restoration at hundreds of iterations per second.

Usage

Basic PMAP, Multi-frame PMAP

import pmapper

img = ... # load an image somehow
psf = ... # acquire the PSF associated with the img
pmp = pmapper.PMAP(img, psf)  # "PMAP problem"
while pmp.iter < 100:  # number of iterations
    fHat = pmp.step()  # fHat is the object estimate

In simulation studies, the true object can be compared to fHat (for example, mean square error) to track convergence. If the psf is "larger" than the image, for example a 1024x1024 image and a 2048x2048 psf, the output will be super-resolved at the 2048x2048 resolution.
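As an illustration, a minimal sketch of such a convergence check follows; truth and mses are hypothetical names for the ground-truth object and the per-iteration error history, not part of pmapper's API:

import numpy as np

import pmapper

truth = ...  # ground-truth object used to synthesize the simulated image
img = ...    # simulated image formed from truth
psf = ...    # PSF used in the simulation

pmp = pmapper.PMAP(img, psf)
mses = []
while pmp.iter < 100:
    fHat = pmp.step()
    # compare the object estimate to the known truth each iteration;
    # shapes must match, i.e. truth is sampled at the output resolution
    mses.append(float(np.mean((fHat - truth) ** 2)))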

PMAP is able to combine multiple images of the same object with different PSFs into one via the multi-frame variant. This can be used to combat dynamic atmospheric seeing conditions or line-of-sight jitter, or even to perform incoherent aperture synthesis, rendering images from sparse-aperture systems that mimic or exceed a system with a fully filled aperture.

import pmapper

# load a sequence of images; could be any iterable,
# or e.g. a kxmxn ndarray, with k = num frames
# psfs must have the same "size" (k) and correspond
# to the images in the same indices
imgs = ...
psfs = ...
pmp = pmapper.MFPMAP(imgs, psfs)  # "PMAP problem"
while pmp.iter < len(imgs)*100:  # number of iterations
    fHat = pmp.step()  # fHat is the object estimate

Multi-frame PMAP cycles through the images and PSFs, so the total number of iterations "should" be an integer multiple of the number of source images. In this way, each image is "visited" an equal number of times.
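A minimal sketch of stopping on full-cycle boundaries follows; n_cycles, prev, and delta are illustrative names rather than part of pmapper's API:

import numpy as np

import pmapper

imgs = ...  # k frames of the same object
psfs = ...  # k PSFs matching those frames
pmp = pmapper.MFPMAP(imgs, psfs)

n_cycles = 100  # number of full passes over the data
prev = None
for _ in range(n_cycles):
    for _ in range(len(imgs)):
        fHat = pmp.step()  # each step advances to the next (img, psf) pair
    # evaluate progress only at cycle boundaries, so every frame has been
    # visited an equal number of times before estimates are compared
    if prev is not None:
        delta = float(np.mean(np.abs(fHat - prev)))
    prev = fHat.copy()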

GPU computing

As mentioned previously, pmapper can be used trivially on a GPU. To do so, simply make the following modification:

import pmapper
from pmapper import backend

import cupy as cp
from cupyx.scipy import (
    ndimage as cpndimage,
    fft as cpfft
)

backend.np._srcmodule = cp
backend.fft.fft = cpfft
backend.ndimage._srcmodule = cpndimage

# if your data is not on the GPU already
img = cp.array(img)
psf = cp.array(psf)

# ... do PMAP, it will run on a GPU without changing anything about your code

fHatCPU = fHat.get()

cupy is not the only way to do this; anything that quacks like numpy, scipy.fft, and scipy.ndimage can be substituted for the backend, and the substitution can be done dynamically at runtime. You will likely want to cast your imagery from fp64 to fp32 for better performance on the GPU.
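As a sketch of that cast, assuming img and psf are the CuPy arrays from the example above:

# single precision is typically much faster than double precision on GPUs
img = img.astype(cp.float32)
psf = psf.astype(cp.float32)

pmp = pmapper.PMAP(img, psf)
while pmp.iter < 100:
    fHat = pmp.step()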
