Motion planning environment for Sampling-based Planners

Overview

Sampling-Based Motion Planners' Testing Environment


Sampling-based motion planners' testing environment (sbp-env) is a full-featured framework for quickly testing different sampling-based algorithms for motion planning. sbp-env focuses on the flexibility of tinkering with different aspects of the framework, and divides the main planning components into two categories: (i) samplers and (ii) planners.

The focus of motion planning research has mainly been on (i) improving sampling efficiency (with methods such as heuristics or learned distributions) and (ii) the algorithmic aspect of the planner, i.e. using different routines to build a connected graph. Therefore, by separating the two components one can quickly swap out different parts to test novel ideas.
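For example, because the planner is selected independently of the map (and of the sampler it uses), the same scenario can be re-run with a different planner simply by changing one command-line argument. The commands below use only the interface documented in the Quick Guide and Demos sections; they are an illustrative comparison rather than a prescribed benchmark.

# same map, three different planners
python main.py rrt maps/room1.png -vv
python main.py birrt maps/room1.png -vv
python main.py rrdt maps/room1.png -vv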

Have a look at the documentation for more detailed information. If you are looking for the previous code for the RRdT* paper, it is now archived at soraxas/rrdt.

Installation

Optional

I recommend first creating a virtual environment with

# assumes python3 and bash shell
python -m venv sbp_env
source sbp_env/bin/activate

Install dependencies

You can install all the needed packages with pip.

pip install -r requirements.txt

There is also an optional dependency on klampt if you want to use the 3D simulator. Refer to its installation guide for details.
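For instance, a minimal sketch of the extra step for the 3D engine, assuming a plain pip setup (klampt is distributed on PyPI; consult its installation guide if you need a source build):

# optional: only needed for the 3D simulation engine
pip install klampt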

Quick Guide

You can get a detailed help message with

python main.py --help

but the basic syntax is

python main.py <PLANNER> <MAP> [options]

This will open a new window displaying the map. Every white pixel is assumed to be free space, and every non-white pixel is an obstacle. You will need to use your mouse to select two points on the map: the first is set as the start point and the second as the goal point.
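If you prefer not to pick the points with the mouse, the start and goal can also be supplied on the command line using the start/goal options shown in the Demos section below; the coordinates in this sketch are arbitrary example pixel positions and must lie in free space on your map.

python main.py rrt maps/room1.png start 25,25 goal 225,225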

Demos

Run maps with different available Planners

This repository contains a framework for performing quick experiments with Sampling-Based Planners (SBPs) implemented in Python. The following planners have been implemented and experimented with in this framework.

Note that the commands shown in the respective demos can be customised with additional options. In fact, the actual command format used for the demonstrations is

python main.py <PLANNER> maps/room1.png start <sx>,<sy> goal <gx>,<gy> -vv

to have a fixed set of start and goal points for consistent visualisation, but we omit the start/goal options in the following commands for clarity.

RRdT*

python main.py rrdt maps/room1.png -vv

RRdT* Planner

RRT*

python main.py rrt maps/room1.png -vv

RRT* Planner

Bi-RRT*

python main.py birrt maps/room1.png -vv

Bi-RRT* Planner

Informed RRT*

python main.py informedrrt maps/room1.png -vv

Informed RRT* Planner

The red ellipse shown is the dynamic sampling area for Informed RRT*.

Others

There are also some other planners included in this repository. Some are preliminary planners that inspired RRdT*, some implement preliminary ideas, and some are useful for debugging.

Reference to this repository

You can use the following citation if you use this repository in your research:

@article{lai2021SbpEnv,
  doi = {10.21105/joss.03782},
  url = {https://doi.org/10.21105/joss.03782},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {66},
  pages = {3782},
  author = {Tin Lai},
  title = {sbp-env: A Python Package for Sampling-based Motion Planner and Samplers},
  journal = {Journal of Open Source Software}
}
Comments
  • question on (example) usage

    According to the submitted paper, with sbp-env "one can quickly swap out different components to test novel ideas" and "validate ... hypothesis rapidly". However, from the examples in the documentation, it is unclear to me how I can obtain performance metrics on the planners when I run a test.

    Is there a way to save such metrics to a file or print them when running planners in sbp-env? If not, this might be a nice feature to implement in a future version. Otherwise, you could consider adding an example to the documentation on how to compare different planners in the same scenario.

    (this question is part of the JOSS review openjournals/joss-reviews#3782)

    opened by OlgerSiebinga 5
  • Path recognition issue

    I tried some source and destination positions with the following command, and there seems to be an issue with the recognition of the path: python main.py rrt maps/4d.png --engine 4d

    (A screenshot is attached in the original issue.)

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 3
  • Python version compatibility with scipy

    Mentioning the requirement of Python version >= 3.8 in the README would also help users, the way it's done over here. Python versions < 3.8 are not compatible with scipy 1.6.

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 3
  • Suggestion to make installation easier

    I was wondering why you have the following remark block in your installation instructions (see the screenshot in the original issue).

    I think it would be easier to add those two packages to the file requirements_klampt.txt. That way they'll be installed automatically, which saves the user an extra action. Or is there any reason I'm missing why that shouldn't be done?

    opened by OlgerSiebinga 3
  • Exception after running the example from the documentation

    When I run the example from the quick start page in the documentation, an exception occurs.

    The command: python main.py rrt maps/room1.png

    The exception:

    Traceback (most recent call last):
      File "main.py", line 287, in <module>
        environment.run()
      File "C:\Users\Olger\PycharmProjects\sbp-env\env.py", line 198, in run
        self.visualiser.terminates_hook()
      File "C:\Users\Olger\PycharmProjects\sbp-env\visualiser.py", line 148, in terminates_hook
        self.env_instance.sampler.visualiser.terminates_hook()
      File "C:\Users\Olger\PycharmProjects\sbp-env\env.py", line 126, in __getattr__
        return object.__getattribute__(self.visualiser, attr)
    AttributeError: 'PygameEnvVisualiser' object has no attribute 'sampler'
    

    The exception only occurs after the simulation has finished, so it seems like a minor problem. However, I'm not really sure what happens at env.py, line 126, in __getattr__, or why, so I don't have a proposed fix.

    opened by OlgerSiebinga 2
  • invalid start and goal point can be specified with command-line interface

    When specifying start and goal points on the command line, it is possible to specify invalid points. Specifying an invalid start and goal will result in an infinite loop.

    For example, running python main.py rrt maps\room1.png start 10,10 goal 15,15 will result in an infinite loop with the following GUI:

    (A screenshot is attached in the original issue.)

    The expected behavior when supplying an invalid point would be an exception.

    opened by OlgerSiebinga 1
  • Test Instructions

    Though it's standard, adding instructions for running the tests to the documentation might be helpful for users wanting to contribute.

    (Part of the JOSS review openjournals/joss-reviews#3782)

    opened by KanishAnand 1
  • Graph building of prm planner without user information

    The graph-building method in the PRM planner (build_graph() in prmPlanner.py) can take quite some time when a large number of nodes is used. However, the user is not notified that the planner is still processing data. The first time I encountered this, I suspected the software was stuck in an infinite loop because the window was not responding anymore. I think this can easily be fixed by adding a tqdm bar in the build_graph() method (at line 83); a minimal sketch of this idea appears after the comments list below.

    (this suggestion is part of the JOSS review openjournals/joss-reviews#3782)

    opened by OlgerSiebinga 1
  • Skip-optimality Problem

    Hi. 1. I am wondering about the use_rtree parameter in the choose_least_cost_parent() and rewire() functions (RRT). Is it no longer necessary because numpy is used for the calculation instead? 2. When I run the informedrrt algorithm, the ellipse in the visualisation does not appear as shown in the documentation. How can it be displayed? I'm sorry to interrupt you from your busy schedule.

    opened by Jiawei-00 7
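As a minimal sketch of the tqdm suggestion from the graph-building comment above (illustrative only: the names nodes and add_edges_for are hypothetical placeholders, not the actual code in prmPlanner.py):

from tqdm import tqdm

def build_graph(self):
    # Wrapping the node loop in tqdm displays a progress bar, so a
    # long-running build is not mistaken for a frozen window.
    for node in tqdm(self.nodes, desc="Building PRM graph"):
        self.add_edges_for(node)  # placeholder for the actual per-node work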
Releases: v2.0.1