
AdaOPS


An implementation of AdaOPS (Adaptive Online Packing-guided Search), an online POMDP solver for problems defined with the POMDPs.jl generative interface. The AdaOPS paper was published at NeurIPS 2021.

If you are trying to use this package and require more documentation, please file an issue!

Installation

Press the ] key to enter the package manager mode of the Julia REPL. Then, execute the following commands.

pkg> add "POMDPs"
pkg> registry add "https://github.com/JuliaPOMDP/Registry.git"
pkg> add AdaOPS

Usage

using POMDPs, POMDPModels, POMDPSimulators, AdaOPS

pomdp = TigerPOMDP()

solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0))
planner = solve(solver, pomdp)

for (s, a, o) in stepthrough(pomdp, planner, "s,a,o", max_steps=10)
    println("State was $s,")
    println("action $a was taken,")
    println("and observation $o was received.\n")
end

For minimal examples of problem implementations, see this notebook and the POMDPs.jl generative docs.

Solver Options

Solver options can be found in the AdaOPSSolver docstring and accessed using Julia's built-in documentation system (or directly in the solver source code). Each option has its own docstring and can be set with a keyword argument of the AdaOPSSolver constructor.
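
For example, the docstring can be viewed in the Julia REPL with ?AdaOPSSolver, and options are passed as keyword arguments. A minimal sketch with illustrative values:

using AdaOPS

solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0),  # bounds (see the Bounds section below)
                      delta=0.1,                             # packing radius (see Belief Packing)
                      m_min=30)                              # minimum number of particles per belief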

Belief Packing

delta

A δ-packing of observation branches will be generated, i.e., belief nodes whose L1 distance is less than delta are merged.

Adaptive Particle Filter

The core idea of the adaptive particle filter is that it adjusts the number of particles on the fly, using more particles to estimate a belief when needed.

grid

grid is used to split the state space into multidimensional bins so that KLD-Sampling can estimate the number of particles needed from the number of bins occupied. First, implement a function that converts a state to a multidimensional vector, i.e., Base.convert(::Type{SVector{D, Float64}}, ::S), where D is the dimension of the resulting vector. Then, define a StateGrid to discretize the state space. A StateGrid consists of a vector of cutpoints for each dimension. These cutpoints divide the whole space into small tiles: in each dimension, the intervals between cutpoints constitute the grid, and each interval is left-closed and right-open with cutpoints as endpoints, with the exception of the left-most interval. For example, a StateGrid can be defined as StateGrid([dim1_cutpoints], [dim2_cutpoints], [dim3_cutpoints]). All states lying in the same tile are treated as identical. Given the number of tiles (bins) occupied, the number of particles is estimated using KLD-Sampling.
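
For instance, here is a minimal sketch for a hypothetical domain whose states have two continuous coordinates x and y in [0, 10) (MyState and the cutpoint values are illustrative, not part of the package):

using StaticArrays, AdaOPS

# A hypothetical state type with two continuous coordinates.
struct MyState
    x::Float64
    y::Float64
end

# Map a state to a 2-dimensional vector so it can be assigned to a bin.
Base.convert(::Type{SVector{2,Float64}}, s::MyState) = SVector(s.x, s.y)

# Cutpoints 1.0, 2.0, ..., 9.0 in each dimension split the space into a 10 x 10 grid of tiles.
grid = StateGrid(collect(1.0:9.0), collect(1.0:9.0))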

max_occupied_bins

max_occupied_bins is the maximum number of bins a belief can occupy. Normally, it is exactly the grid size. However, in some domains, such as Roomba, only states within the room are accessible, and the bins outside the room will never be occupied.

min_occupied_bins

min_occupied_bins is the minimum number of bins occupied by a belief. Normally, it defaults to 2. A belief occupying min_occupied_bins tiles will be estimated with m_min particles. Increasing min_occupied_bins means that a belief needs to occupy more bins in order to be estimated with the same number of particles.

m_min

m_min is the minimum number of particles used for approximating beliefs.

m_max

m_max is the maximum number of particles used for approximating a belief. Normally, m_max is set large enough that KLD-Sampling determines the number of particles used. When KLD-Sampling is disabled, i.e., grid=StateGrid(), m_max particles will be sampled during resampling.

zeta

zeta is the target error when estimating a belief. Specifically, KLD-Sampling is used to calculate the number of particles needed, where zeta is the target Kullback-Leibler divergence between the estimated belief and the true belief. In AdaOPS, zeta is automatically adjusted according to the minimum number of bins occupied, such that the minimum number of particles suggested by KLD-Sampling is exactly m_min.
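
Putting the adaptive particle filter options together, a hedged sketch of a solver configuration (values are illustrative; grid is the StateGrid sketched above):

solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0),
                      grid=grid,    # the StateGrid defined above; enables KLD-Sampling
                      m_min=30,     # lower limit on the number of particles
                      m_max=200)    # upper limit; zeta is adjusted automatically by default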

Bounds

Dependent bounds

The bound passed to AdaOPSSolver can be a function of the form lower_bound, upper_bound = f(pomdp, wpf_belief), or any other object for which an AdaOPS.bounds(obj::OBJECT, pomdp::POMDP, b::WPFBelief, max_depth::Int, bounds_warning::Bool) method is implemented.
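
A hedged sketch of the function form, with trivially loose placeholder values:

using POMDPs, AdaOPS

# A bound function must return (lower_bound, upper_bound) for the given belief;
# the constants here are placeholders.
function my_bounds(pomdp::POMDP, b::WPFBelief)
    return -100.0, 0.0
end

solver = AdaOPSSolver(bounds=my_bounds)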

Independent bounds

In most cases, the recommended way to specify bounds is with an IndependentBounds object, i.e.

AdaOPSSolver(bounds=IndependentBounds(lower, upper))

where lower and upper are each a number, a function, or some other object (see below).

Often, the lower bound is calculated with a default policy; this can be accomplished using a PORollout, FORollout, or RolloutEstimator. For in-depth details, please refer to BasicPOMCP. Note that when mixing the Rollout structs from this package with those from BasicPOMCP, you should prefix the struct name with the module name.

Both the lower and upper bounds can be initialized with value estimates using FOValue or POValue. FOValue supports any offline MDP Solver or Policy; POValue supports any offline POMDP Solver or Policy.
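
For example, a hedged sketch that uses an offline MDP value-iteration policy as the upper bound (this assumes DiscreteValueIteration.jl is installed; the lower bound here is just a constant):

using DiscreteValueIteration

# Upper bound from the fully observable MDP value, computed offline by value iteration.
upper = FOValue(ValueIterationSolver(max_iterations=1000, belres=1e-6))
solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, upper))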

If lower or upper is a function, it should accept two arguments: the POMDP object and a WPFBelief. To access the state particles of a WPFBelief b, use particles(b); to access the corresponding particle weights, use weights(b). All AbstractParticleBelief APIs are supported for WPFBelief. More details can be found in the solver source code.
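
As a hedged illustration of this form, the following computes a weighted average of a hypothetical per-state heuristic (state_value is a placeholder you would supply, not part of the package):

using POMDPs, ParticleFilters, AdaOPS

function my_upper(pomdp::POMDP, b::WPFBelief)
    total = sum(w * state_value(pomdp, s) for (s, w) in zip(particles(b), weights(b)))
    return total / weight_sum(b)  # normalize by the total particle weight
end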

If an object o is passed in, AdaOPS.bound(o, pomdp::POMDP, b::WPFBelief, max_depth::Int) will be called.

In most cases, the check_terminal and consistency_fix_thresh keyword arguments of IndependentBounds should be used to add robustness (see the IndependentBounds docstring for more info). When using rollout-based bounds, you can specify the max_depth keyword argument to set the maximum depth of the rollout.
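
For instance (a hedged sketch, with lower and upper defined as in the example below; the keyword values are illustrative):

bounds = IndependentBounds(lower, upper,
                           check_terminal=true,          # handle terminal beliefs robustly
                           consistency_fix_thresh=1e-5)  # tolerate tiny numerical bound violations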

Example

For the BabyPOMDP from POMDPModels, bounds setup might look like this:

using POMDPModels
using POMDPPolicies
using BasicPOMCP

always_feed = FunctionPolicy(b->true)
lower = FORollout(always_feed)

function upper(pomdp::BabyPOMDP, b::WPFBelief)
    if all(s==true for s in particles(b)) # all particles are hungry
        return pomdp.r_hungry # the baby is hungry this time, but then becomes full magically and stays that way forever
    else
        return 0.0 # the baby magically stays full forever
    end
end

solver = AdaOPSSolver(bounds=IndependentBounds(lower, upper))

Visualization

D3Trees.jl can be used to visualize the search tree, for example

using POMDPs, POMDPModels, POMDPModelTools, D3Trees, AdaOPS

pomdp = TigerPOMDP()

solver = AdaOPSSolver(bounds=IndependentBounds(-20.0, 0.0), tree_in_info=true)
planner = solve(solver, pomdp)
b0 = initialstate(pomdp)

a, info = action_info(planner, b0)
inchrome(D3Tree(info[:tree], init_expand=5))

will create an interactive tree.

Analysis

Two utilities, info_analysis and hist_analysis, are provided to give a sense of how the algorithm is working. info_analysis takes the information returned from action_info(planner, b0). It first visualizes the tree if the tree_in_info option is turned on, and then shows stats such as the number of nodes expanded, the total number of explorations, the average number of observation branches, and so on. hist_analysis takes the hist from the HistoryRecorder simulator and shows similar stats to info_analysis, but in the form of figures. Note that HistoryRecorder stores the tree of every single step, which makes it memory-intensive. An example is shown below.

using POMDPs, AdaOPS, RockSample, POMDPSimulators, ParticleFilters, POMDPModelTools

m = RockSamplePOMDP(11, 11)

b0 = initialstate(m)
s0 = rand(b0)

bound = AdaOPS.IndependentBounds(FORollout(RSExitSolver()), FOValue(RSMDPSolver()), check_terminal=true, consistency_fix_thresh=1e-5)

solver = AdaOPSSolver(bounds=bound,
                        delta=0.3,
                        m_min=30,
                        m_max=200,
                        tree_in_info=true,
                        num_b=10_000
                        )

adaops = solve(solver, m)
a, info = action_info(adaops, b0)
info_analysis(info)

num_particles = 30000
@time hist = simulate(HistoryRecorder(max_steps=90), m, adaops, SIRParticleFilter(m, num_particles), b0, s0)
hist_analysis(hist)
@show undiscounted_reward(hist)

Reference

@inproceedings{wu2021adaptive,
  title={Adaptive Online Packing-guided Search for POMDPs},
  author={Wu, Chenyang and Yang, Guoyu and Zhang, Zongzhang and Yu, Yang and Li, Dong and Liu, Wulong and others},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021}
}