Elevation Mapping on GPU (elevation_mapping_cupy)

Overview

This is a ROS package for elevation mapping on GPU.
The code is written in Python and uses CuPy for GPU computation.

* Plane segmentation is coming soon.

Citing

Takahiro Miki, Lorenz Wellhausen, Ruben Grandia, Fabian Jenelten, Timon Homberger, Marco Hutter,
"Elevation Mapping for Locomotion and Navigation using GPU", arXiv:2204.12876, 2022.

@misc{https://doi.org/10.48550/arxiv.2204.12876,
  doi       = {10.48550/ARXIV.2204.12876},
  url       = {https://arxiv.org/abs/2204.12876},
  author    = {Miki, Takahiro and Wellhausen, Lorenz and Grandia, Ruben and Jenelten, Fabian and Homberger, Timon and Hutter, Marco},
  keywords  = {Robotics (cs.RO), FOS: Computer and information sciences},
  title     = {Elevation Mapping for Locomotion and Navigation using GPU},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

Installation

CUDA & cuDNN

The tested versions are CUDA 10.2 and 11.6.

Install CUDA and cuDNN by following the official installation guides.

Python dependencies

You will need cupy (see the Cupy section below).

For the traversability filter, you will need either pytorch or chainer (see the Traversability filter section below).

Optionally, opencv is used for the inpainting filter.

Install numpy, scipy, shapely, and opencv-python with the following command.

pip3 install -r requirements.txt

Cupy

cupy can be installed as a prebuilt wheel matching your CUDA version. (On Jetson, only building from source, i.e. plain pip install cupy, works.)

For CUDA 10.2 pip install cupy-cuda102

For CUDA 11.0 pip install cupy-cuda110

For CUDA 11.1 pip install cupy-cuda111

For CUDA 11.2 pip install cupy-cuda112

For CUDA 11.3 pip install cupy-cuda113

For CUDA 11.4 pip install cupy-cuda114

For CUDA 11.5 pip install cupy-cuda115

For CUDA 11.6 pip install cupy-cuda116

(Install CuPy from source) % pip install cupy
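After installation, a quick sanity check like the minimal snippet below (illustrative only, not part of the package) confirms that CuPy can see the GPU and run a kernel:

import cupy as cp  # CuPy sanity check (illustrative only)

print(cp.cuda.runtime.getDeviceCount())  # number of visible CUDA devices
x = cp.arange(10, dtype=cp.float32)      # array allocated on the GPU
print(float(x.sum()))                    # 45.0 if the kernel ran correctly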

Traversability filter

You can choose either PyTorch or Chainer to run the CNN-based traversability filter.
Install it by following the official documentation.

PyTorch uses ~2 GB more GPU memory than Chainer, but runs a bit faster.
Use the parameter use_chainer to select which backend to use.
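For illustration only (this is not the package's actual code), a use_chainer-style flag simply decides which backend is imported; the sketch assumes the chosen backend is installed:

use_chainer = True  # corresponds to the `use_chainer` ROS parameter

if use_chainer:
    import chainer  # lower GPU memory, slightly slower
else:
    import torch    # ~2 GB more GPU memory, a bit faster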

ROS package dependencies

sudo apt install ros-noetic-pybind11-catkin
sudo apt install ros-noetic-grid-map-core ros-noetic-grid-map-msgs

On Jetson

CUDA & cuDNN

CUDA and cuDNN can be installed via apt; they come with nvidia-jetpack. The tested version is JetPack 4.5 with L4T 32.5.0.

Python dependencies

On Jetson, you need the PyTorch wheel built for its CPU architecture (aarch64):

wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
pip3 install Cython
pip3 install numpy==1.19.5 torch-1.8.0-cp36-cp36m-linux_aarch64.whl
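Optionally, a quick check in plain Python (not part of the package) verifies that the wheel can actually use the GPU:

import torch  # optional sanity check for the Jetson wheel (illustrative only)

print(torch.__version__)          # e.g. 1.8.0
print(torch.cuda.is_available())  # should print True on a correctly set up Jetson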

Also, you need to install cupy with

pip3 install cupy

This builds the package from source, so it will take some time.

ROS dependencies

sudo apt install ros-melodic-pybind11-catkin
sudo apt install ros-melodic-grid-map-core ros-melodic-grid-map-msgs

Also, on Jetson you need a Fortran compiler (it should already be installed).

sudo apt install gfortran

If the Jetson is set up with JetPack 4.5 and ROS Melodic, the following package is additionally required:

git clone git@github.com:ros/filters.git -b noetic-devel

Usage

Build

catkin build elevation_mapping_cupy

Errors

If you get an error such as

CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
  Could NOT find PythonInterp: Found unsuitable version "2.7.18", but
  required is at least "3" (found /usr/bin/python)

build with the following option:

catkin build elevation_mapping_cupy -DPYTHON_EXECUTABLE=$(which python3)

Run

Basic usage.

roslaunch elevation_mapping_cupy elevation_mapping_cupy.launch

Run TurtleBot example

First, install the TurtleBot3 simulation packages.

sudo apt install ros-noetic-turtlebot3*

Then, you can run the example.

export TURTLEBOT3_MODEL=waffle
roslaunch elevation_mapping_cupy turtlesim_example.launch

To control the robot with a keyboard, a new terminal window needs to be opened.
Then run

export TURTLEBOT3_MODEL=waffle
roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch

Velocity inputs can be sent to the robot by pressing the keys a, w, d, x. To stop the robot completely, press s.

Subscribed Topics

  • topics specified in pointcloud_topics in parameters.yaml ([sensor_msgs/PointCloud2])

    The distance measurements.

  • /tf ([tf/tfMessage])

    The transformation tree.

Published Topics

The topics are published as configured in the rosparam.
You can specify which layers to publish at which fps.

Under publishers, you can specify the topic_name, layers, basic_layers, and fps.

publishers:
  your_topic_name:
    layers: ['list_of_layer_names', 'layer1', 'layer2']             # Choose from 'elevation', 'variance', 'traversability', 'time' + plugin layers
    basic_layers: ['list of basic layers', 'layer1']                # basic_layers for valid cell computation (e.g. Rviz): Choose a subset of `layers`.
    fps: 5.0                                                        # Publish rate. Use smaller value than `map_acquire_fps`.

Example setting in config/parameters.yaml.

  • elevation_map_raw ([grid_map_msgs/GridMap])

    The entire elevation map.

  • elevation_map_recordable ([grid_map_msgs/GridMap])

    The entire elevation map with slower update rate for visualization and logging.

  • elevation_map_filter ([grid_map_msgs/GridMap])

    The filtered maps using plugins.
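As a consumer-side illustration (not part of the package), a minimal rospy listener for one of these topics could look like the sketch below; the topic name is an assumption and should match your publishers configuration:

#!/usr/bin/env python3
# Minimal GridMap listener (illustrative sketch; the topic name is an assumption).
import rospy
from grid_map_msgs.msg import GridMap

def callback(msg):
    # `layers` holds the layer names configured under `publishers` in parameters.yaml.
    rospy.loginfo("resolution: %.3f m, layers: %s", msg.info.resolution, ", ".join(msg.layers))

if __name__ == "__main__":
    rospy.init_node("elevation_map_listener")
    rospy.Subscriber("elevation_map_raw", GridMap, callback, queue_size=1)
    rospy.spin()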

Plugins

You can create your own plugin to process the elevation map and publish it as a layer in the GridMap message.

Let's look at an example.

First, create your plugin file in elevation_mapping_cupy/script/plugins/ and save it as example.py.

import cupy as cp
from typing import List

from .plugin_manager import PluginBase


class NameOfYourPlugin(PluginBase):
    def __init__(self, add_value: float = 1.0, **kwargs):
        super().__init__()
        self.add_value = float(add_value)

    def __call__(self, elevation_map: cp.ndarray, layer_names: List[str],
                 plugin_layers: cp.ndarray, plugin_layer_names: List[str]) -> cp.ndarray:
        # Process maps here.
        # You can also use other plugins' data through plugin_layers.
        new_elevation = elevation_map[0] + self.add_value
        return new_elevation

Then, add your plugin setting to config/plugin_config.yaml

example:                                      # Use the same name as your file name.
  enable: True                                # whether to load this plugin
  fill_nan: True                              # Fill NaNs of invalid cells in the elevation layer.
  is_height_layer: True                       # If this is a height layer (such as elevation) or not (such as traversability)
  layer_name: "example_layer"                 # The layer name.
  extra_params:                               # These parameters are passed to the plugin class on initialization.
    add_value: 2.0                            # Example parameter
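For intuition only, the following sketch assumes (it is not the package's actual loading code) that the dictionary under extra_params ends up as keyword arguments of the plugin constructor:

# Illustrative only: assumed mapping from `extra_params` in plugin_config.yaml
# to the constructor keyword arguments of the plugin.
extra_params = {"add_value": 2.0}  # parsed from the yaml above

class DummyPlugin:
    """Stand-in with the same constructor signature as the example plugin."""
    def __init__(self, add_value: float = 1.0, **kwargs):
        self.add_value = float(add_value)

plugin = DummyPlugin(**extra_params)
print(plugin.add_value)  # 2.0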

Finally, add your layer name to publishers in config/parameters.yaml. You can create a new topic or add it to existing topics.

  plugin_example:   # Topic name
    layers: ['elevation', 'example_layer']
    basic_layers: ['example_layer']
    fps: 1.0        # The plugin is called with this fps.