Visual odometry package based on the hardware-accelerated NVIDIA Elbrus library, delivering world-class quality and performance.

Overview

Isaac ROS Visual Odometry

This repository provides a ROS2 package that estimates stereo visual inertial odometry using the Isaac Elbrus GPU-accelerated library. It takes a time-synchronized pair of stereo images (grayscale), along with the respective camera intrinsics, and publishes the current pose of the camera relative to its start pose.

Elbrus is based on two core technologies: Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM).

Visual Odometry is a method for estimating a camera's position relative to its start position. The method is iterative: at each step it considers two consecutive input frames (stereo pairs), finds a set of keypoints in each frame, and matches the keypoints between the two sets to estimate the translation and relative rotation of the camera between frames.
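
To make the iteration concrete, here is a minimal, illustrative Python/OpenCV sketch of a single VO step. This is not the Elbrus implementation (Elbrus runs a GPU-accelerated pipeline and uses the stereo pair to resolve scale); it only shows the detect, match, and estimate pattern described above:

    # Illustrative only: one monocular VO step with OpenCV (not Elbrus).
    # The translation recovered from the essential matrix is known only up
    # to scale; the stereo pair (or IMU) is what resolves the true scale.
    import cv2

    def vo_step(prev_frame, curr_frame, K):
        # 1. Find a set of keypoints on the previous grayscale frame.
        prev_pts = cv2.goodFeaturesToTrack(
            prev_frame, maxCorners=500, qualityLevel=0.01, minDistance=7)
        # 2. Match them into the current frame via sparse optical flow.
        curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_frame, curr_frame, prev_pts, None)
        good_prev = prev_pts[status.flatten() == 1]
        good_curr = curr_pts[status.flatten() == 1]
        # 3. Estimate the relative rotation R and up-to-scale translation t.
        E, inliers = cv2.findEssentialMat(
            good_curr, good_prev, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, good_curr, good_prev, K, mask=inliers)
        return R, t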

Simultaneous Localization and Mapping is a method built on top of the VO predictions. It aims to improve the quality of VO estimations by leveraging the knowledge of previously seen parts of a trajectory. It detects if the current scene was seen in the past (i.e. a loop in camera movement) and runs an additional optimization procedure to tune previously obtained poses.

Along with visual data, Elbrus can optionally use Inertial Measurement Unit (IMU) measurements. It automatically switches to the IMU when VO is unable to estimate a pose; for example, when there is poor lighting or a long textureless surface in front of the camera.

Elbrus delivers real-time tracking performance: more than 60 FPS for VGA resolution. For the KITTI benchmark, the algorithm achieves a drift of ~1% in localization and an orientation error of 0.003 degrees per meter of motion.

Elbrus allows for robust tracking in various environments and with different use cases: indoor, outdoor, aerial, HMD, automotive, and robotics.

This has been tested on ROS2 (Foxy) and should build and run on x86_64 and aarch64 (Jetson).

System Requirements

This Isaac ROS package is designed and tested to be compatible with ROS2 Foxy on Jetson hardware.

Jetson

  • AGX Xavier or Xavier NX
  • JetPack 4.6

x86_64

  • Discrete GPU with CUDA 10.2+ support
  • Ubuntu 18.04+

Note: For best performance on Jetson, ensure that power settings are configured appropriately (Power Management for Jetson).

Docker

Precompiled ROS2 Foxy packages are not available for JetPack 4.6 (based on Ubuntu 18.04 Bionic). You can either manually compile ROS2 Foxy and required dependent packages from source or use the Isaac ROS development Docker image from Isaac ROS Common.

You must first install the NVIDIA Container Toolkit to make use of the Docker container development/runtime environment.

Configure nvidia-container-runtime as the default runtime for Docker by editing /etc/docker/daemon.json so that it includes the following (shown here as a complete, valid JSON file):

    {
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        },
        "default-runtime": "nvidia"
    }

Then restart Docker:

    sudo systemctl daemon-reload && sudo systemctl restart docker

Run the following script in isaac_ros_common to build the image and launch the container:

$ scripts/run_dev.sh <optional path>

You can either provide an optional path to mirror your host ROS workspace with Isaac ROS packages, which will be made available in the container as /workspaces/isaac_ros-dev, or you can set up a new workspace in the container.

Package Dependencies

Note: isaac_ros_common is used for running tests and/or creating a development container, and isaac_ros_image_pipeline is used as an executable dependency in launch files.

Rectified vs Smooth Transform

Elbrus provides the user with two possible modes: visual odometry with SLAM (rectified transform) or pure visual odometry (smooth transform).

Using the rectified transform (enable_rectified_pose = true) is more beneficial in cases where the camera path frequently loops back on itself or becomes entangled. Though the rectified transform increases the computational demands of VO, it usually reduces the final position drift dramatically.

On the other hand, pure VO (smooth transform) may be useful on relatively short and straight camera trails. This choice reduces compute usage and increases processing speed without introducing significant pose drift.
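
As a sketch of how the mode might be selected, the launch description below loads the component with enable_rectified_pose set to false for pure VO. The plugin name matches the one reported in this package's container logs, but topic remappings are omitted, so treat this as an outline rather than a drop-in launch file:

    # Sketch: load VisualOdometryNode with SLAM-rectified poses disabled
    # (pure visual odometry / smooth transform only). Remappings of the
    # stereo topics are omitted for brevity.
    from launch import LaunchDescription
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode

    def generate_launch_description():
        visual_odometry_node = ComposableNode(
            package='isaac_ros_visual_odometry',
            plugin='isaac_ros::visual_odometry::VisualOdometryNode',
            name='visual_odometry_node',
            parameters=[{'enable_rectified_pose': False}])
        container = ComposableNodeContainer(
            name='visual_odometry_launch_container',
            namespace='',
            package='rclcpp_components',
            executable='component_container',
            composable_node_descriptions=[visual_odometry_node])
        return LaunchDescription([container])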

Coordinate Frames

This section describes the coordinate frames involved in the VisualOdometryNode. The frames discussed below are oriented as follows:

  1. left_camera_frame: The frame associated with the left eye of the stereo camera. Note that this is not the same as the optical frame.
  2. right_camera_frame: The frame associated with the right eye of the stereo camera. Note that this is not the same as the optical frame.
  3. imu_frame: The frame associated with the IMU sensor (if available).
  4. fixed_frame: The fixed frame that aligns with the start pose of the stereo camera. The tracked poses are published in the TF tree with respect to this frame.
  5. current_smooth_frame: The moving frame that tracks the current smooth pose of the stereo camera. The poses in this frame represent a smooth continuous motion. Note that this frame can drift over time.
  6. current_rectified_frame: The moving frame that tracks the current rectified pose of the stereo camera. The poses in this frame represent an accurate position and orientation. Note that the frame can have sudden jumps.
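
Once the node is running, the pose of current_smooth_frame relative to fixed_frame can be read back from the TF tree. Below is a minimal rclpy sketch that assumes the default frame names (odom and base_link_smooth):

    # Minimal sketch: read the smooth camera pose from the TF tree.
    # Assumes the default frame names odom / base_link_smooth.
    import rclpy
    from rclpy.node import Node
    from rclpy.time import Time
    from tf2_ros.buffer import Buffer
    from tf2_ros.transform_listener import TransformListener

    class SmoothPoseReader(Node):
        def __init__(self):
            super().__init__('smooth_pose_reader')
            self.tf_buffer = Buffer()
            self.tf_listener = TransformListener(self.tf_buffer, self)
            self.timer = self.create_timer(0.1, self.on_timer)

        def on_timer(self):
            try:
                # Latest transform of the moving smooth frame relative to the fixed frame.
                t = self.tf_buffer.lookup_transform(
                    'odom', 'base_link_smooth', Time())
            except Exception as ex:  # transform not available yet
                self.get_logger().info(f'Waiting for TF: {ex}')
                return
            p = t.transform.translation
            self.get_logger().info(f'x={p.x:.3f} y={p.y:.3f} z={p.z:.3f}')

    def main():
        rclpy.init()
        rclpy.spin(SmoothPoseReader())

    if __name__ == '__main__':
        main()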

Quickstart

  1. Create a ROS2 workspace if one is not already prepared:
    mkdir -p your_ws/src
    Note: The workspace can have any name; these steps use the name your_ws.
  2. Clone this package repository to your_ws/src/isaac_ros_visual_odometry. Check that you have Git LFS installed before cloning to pull down all large files.
    sudo apt-get install git-lfs
    cd your_ws/src && git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_odometry
  3. Build and source the workspace:
    cd your_ws && colcon build --symlink-install && source install/setup.bash
  4. (Optional) Run tests to verify complete and correct installation:
    colcon test

This repository comes with pre-configured launch files for a real camera and a simulator.

For RealSense camera

Note: You will need to calibrate the intrinsics of your camera if you want the node to determine the 3D pose of your camera. See here for more details.

  1. Start isaac_ros_visual_odometry using the launch files:
    ros2 launch isaac_ros_visual_odometry isaac_ros_visual_odometry_realsense.launch.py

  2. To connect the RealSense camera's default TF tree to the current_smooth_frame (default value: base_link_smooth), run the following command:
    ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 1 base_link_smooth camera_link

  3. In a separate terminal, spin up RViz to see the tracking of the camera:
    rviz2

  4. Add the TF tree in the Displays panel of RViz.

  5. Set the Fixed frame in the Global Options to the name provided in the fixed_frame variable.

  6. Try moving your camera, and you should see the camera pose being tracked in RViz.

  7. If you prefer to observe the Visual Odometry output in text form, echo the contents of the /visual_odometry/tf_stamped topic in a separate terminal with the following command:
    ros2 topic echo /visual_odometry/tf_stamped

For Isaac Sim

  1. Make sure you have Isaac Sim set up correctly and choose the appropriate working environment (native, or Docker & cloud). For this walkthrough, we are using the native workstation setup for Isaac Sim.

  2. See the Running For The First Time section to launch Isaac Sim from the app launcher, and click on the Isaac Sim button.

  3. Set up the Isaac Sim ROS2 bridge as described here.

  4. Connect to the Nucleus server as shown in the Getting Started section if you have not done so already.

  5. Open up the Isaac ROS Common USD scene located at:

    omniverse://<your_nucleus_server>/Isaac/Samples/ROS/Scenario/carter_warehouse_apriltags_worker.usd.

    And wait for it to load completely.

  6. Enable the right camera for a stereo image pair: go to the Stage tab, select /World/Carter_ROS/ROS_Camera_Stereo_Right, and tick the enabled checkbox.

  7. Press Play to start publishing data from the Isaac Sim application.

  8. In a separate terminal, start isaac_ros_visual_odometry using the launch files:
    ros2 launch isaac_ros_visual_odometry isaac_ros_visual_odometry_isaac_sim.launch.py

  9. In a separate terminal, send the signal to rotate the robot in the sim as follows:
    ros2 topic pub /cmd_vel geometry_msgs/Twist '{linear: {x: 0.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.05}}'

  10. In a separate terminal, run RViz to visualize the tracking of the pose of the camera:
    rviz2

  11. Add the TF tree in the RViz Displays panel.

  12. Set the Fixed frame in the Global Options to the name provided in the fixed_frame variable.

  13. You should see the robot's camera pose being tracked in RViz.

  14. If you prefer to observe the Visual Odometry output in text form, echo the contents of the /visual_odometry/tf_stamped topic in a separate terminal with the following command:
    ros2 topic echo /visual_odometry/tf_stamped

For Isaac Sim with Hardware in the loop (HIL)

The following instructions are for a setup where the sample runs on a Jetson device and Isaac Sim runs on an x86 machine. We will use the ROS_DOMAIN_ID environment variable to put Isaac Sim and the sample application on separate logical networks.

NOTE: Before executing any of the ROS commands, make sure to set the ROS_DOMAIN_ID variable first.

  1. Complete step 5 of the For Isaac Sim section if you have not done so already.

  2. Open the location of the Isaac Sim package in the terminal by clicking the Open in Terminal button.

  3. In the terminal opened by the previous step, set the ROS_DOMAIN_ID as shown:

    export ROS_DOMAIN_ID=<some_number>

  4. Launch Isaac Sim from the script as shown:

    ./isaac-sim.sh

  5. Continue with step 7 of the For Isaac Sim section. Make sure to set the ROS_DOMAIN_ID variable before running the sample application.

Package Reference

isaac_ros_visual_odometry

Available Components

VisualOdometryNode

Topics Subscribed:

  • /stereo_camera/left/image: The grayscale image from the left eye of the stereo camera.
  • /stereo_camera/left/camera_info: CameraInfo from the left eye of the stereo camera.
  • /stereo_camera/right/image: The grayscale image from the right eye of the stereo camera.
  • /stereo_camera/right/camera_info: CameraInfo from the right eye of the stereo camera.
  • visual_odometry/imu: Sensor data from the IMU (optional).

Topics Published:

  • /visual_odometry/tf_stamped: A consolidated message that provides information about the tracked poses. It consists of two poses: smooth_transform is calculated from pure visual odometry; rectified_transform is calculated with SLAM enabled. It also gives the states of both poses. See more details below.

Parameters:

  • enable_rectified_pose: If enabled, rectified_transform is populated in the message. Enabled by default.
  • denoise_input_images: If enabled, input images are denoised. Can be enabled when images are noisy because of low-light conditions. Disabled by default.
  • rectified_images: Flag to mark whether the images coming in on the subscribed topics are rectified or raw. Enabled by default.
  • enable_imu: If enabled, IMU data is used. Disabled by default.
  • enable_debug_mode: If enabled, a debug dump (image frames, timestamps, and camera info) is saved to disk at the path indicated by debug_dump_path. Disabled by default.
  • debug_dump_path: The path to the directory in which to store the debug dump data. The default value is /tmp/elbrus.
  • gravitational_force: The initial gravity vector defined in the odom frame. If the IMU sensor is not parallel to the floor, update all axes with appropriate values. The default value is {0.0, 0, -9.8}.
  • left_camera_frame: The name of the left camera frame. The default value is left_cam.
  • right_camera_frame: The name of the right camera frame. The default value is right_cam.
  • imu_frame: The name of the IMU frame. The default value is imu.
  • fixed_frame: The name of the frame in which odometry or poses are tracked. This is a fixed frame. The default value is odom.
  • current_smooth_frame: The name of the frame where smooth_transform is published. This is a moving frame. The default value is base_link_smooth.
  • current_rectified_frame: The name of the frame where rectified_transform is published. This is a moving frame. The default value is base_link_rectified.

Launch files

  • isaac_ros_visual_odometry.launch.py: Brings up the visual odometry node standalone.
  • isaac_ros_visual_odometry_isaac_sim.launch.py: Brings up the visual odometry node configured for Isaac Sim.
  • isaac_ros_visual_odometry_realsense.launch.py: Brings up the visual odometry node configured for RealSense.
  • isaac_ros_visual_odometry_zed2.launch.py: Brings up the visual odometry node remapped for the ZED 2 camera.

visual_odometry/tf_stamped


# The frame id in the header is used as the reference frame of both the transforms below.
std_msgs/Header header

# The frame id of the child frame to which the transforms point.
string child_frame_id

# Translation and rotation in 3-dimensions of child_frame_id from header.frame_id.
# rectified_transform represents the most accurate position and orientation.
# Consume it when robust tracking is needed. Note: it can result in sudden jumps.
geometry_msgs/Transform rectified_transform

# smooth_transform represents the position and orientation corresponding to a
# smooth, continuous motion. Consume it when it is used as one signal among
# others, e.g. for sensor fusion. Note: it might drift over time.
geometry_msgs/Transform smooth_transform

# Pure visual odometry return code:
# 0 - Unknown state
# 1 - Success
# 2 - Failed
# 3 - Success but invalidated by IMU
uint8 vo_state

# Integrator status:
# 0 - Unknown state
# 1 - Static
# 2 - Inertial
# 3 - IMU
uint8 integrator_state
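
A minimal sketch of consuming this message is shown below. The Python import path for the generated interface is an assumption here; check this repository's interface package for the exact module and type names:

    # Hedged sketch: subscribe to /visual_odometry/tf_stamped and log both
    # transforms. The interface module/type below is an assumption; verify
    # the generated name in this repository's msg definitions.
    import rclpy
    from rclpy.node import Node
    from isaac_ros_visual_odometry_interfaces.msg import VisualOdometryTransformStamped  # assumed name

    class VoListener(Node):
        def __init__(self):
            super().__init__('vo_listener')
            self.create_subscription(
                VisualOdometryTransformStamped, '/visual_odometry/tf_stamped',
                self.on_msg, 10)

        def on_msg(self, msg):
            s = msg.smooth_transform.translation
            r = msg.rectified_transform.translation
            self.get_logger().info(
                f'vo_state={msg.vo_state} '
                f'smooth=({s.x:.2f}, {s.y:.2f}, {s.z:.2f}) '
                f'rectified=({r.x:.2f}, {r.y:.2f}, {r.z:.2f})')

    def main():
        rclpy.init()
        rclpy.spin(VoListener())

    if __name__ == '__main__':
        main()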

Troubleshooting

  • If RViz is not showing the poses, check the Fixed Frame value.
  • If you are seeing Tracker is lost. messages frequently, it could be caused by the following issues:
    • Fast motion causing motion blur in the frames
    • Low-lighting conditions
    • The wrong camera_info being published
  • For better performance:
    • Increase the capture framerate of the camera to yield better tracking results.
    • If input images are noisy, enable the denoise_input_images flag on the node.

Updates

Date Changes
2021-11-15 Isaac Sim HIL documentation update
2021-10-20 Initial release
Comments
  • Test errors

    Hello. Thanks for sharing your code.

    I've tried to install this Isaac SLAM repo using a ROS build. I've installed Foxy and the required components, but after colcon build, when I try to run colcon test, I get this:

    Starting >>> isaac_ros_common
    Starting >>> isaac_ros_test
    Starting >>> isaac_ros_nvengine_interfaces
    Starting >>> isaac_ros_visual_slam_interfaces
    Finished <<< isaac_ros_nvengine_interfaces [2.24s]                                                                            
    Finished <<< isaac_ros_visual_slam_interfaces [2.24s]
    Finished <<< isaac_ros_common [2.98s]                                              
    --- stderr: isaac_ros_test                     
    
    =============================== warnings summary ===============================
    ../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:633
      Warning: The (fspath: py.path.local) argument to Package is deprecated. Please use the (path: pathlib.Path) argument instead.
      See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path
    
    ../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:146
    ../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:146
      Warning: <class 'launch_testing_ros.pytest.hooks.LaunchROSTestModule'> is not using a cooperative constructor and only takes {'parent', 'fspath'}.
      See https://docs.pytest.org/en/stable/deprecations.html#constructors-of-custom-pytest-node-subclasses-should-take-kwargs for more details.
    
    ../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:633
    ../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:633
      Warning: The (fspath: py.path.local) argument to LaunchROSTestModule is deprecated. Please use the (path: pathlib.Path) argument instead.
      See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path
    
    ../../../../../../opt/ros/foxy/lib/python3.8/site-packages/ament_flake8/main.py:26
    ../../../../.local/lib/python3.8/site-packages/setuptools/_distutils/version.py:346
    test/test_flake8.py::test_flake8
    test/test_flake8.py::test_flake8
    test/test_pep257.py::test_pep257
    test/test_pep257.py::test_pep257
      Warning: distutils Version classes are deprecated. Use packaging.version instead.
    
    ../../../../../../opt/ros/foxy/lib/python3.8/site-packages/launch_testing/pytest/hooks.py:179
    ../../../../../../opt/ros/foxy/lib/python3.8/site-packages/launch_testing/pytest/hooks.py:179
    ../../../../../../opt/ros/foxy/lib/python3.8/site-packages/launch_testing/pytest/hooks.py:179
      Warning: The (fspath: py.path.local) argument to Module is deprecated. Please use the (path: pathlib.Path) argument instead.
      See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path
    
    -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
    ---
    Finished <<< isaac_ros_test [15.7s]
    Starting >>> isaac_ros_image_proc
    Starting >>> isaac_ros_stereo_image_proc
    Starting >>> isaac_ros_nvengine                                                                      
    Finished <<< isaac_ros_nvengine [11.7s]                                                                                             
    --- stderr: isaac_ros_stereo_image_proc                                                                
    Errors while running CTest
    Output from these tests are in: /home/daddywesker/ros2_ws/build/isaac_ros_stereo_image_proc/Testing/Temporary/LastTest.log
    Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
    ---
    
    Finished <<< isaac_ros_stereo_image_proc [25.8s]	[ with test failures ]
    --- stderr: isaac_ros_image_proc                     
    Errors while running CTest
    Output from these tests are in: /home/daddywesker/ros2_ws/build/isaac_ros_image_proc/Testing/Temporary/LastTest.log
    Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
    ---
    Finished <<< isaac_ros_image_proc [35.1s]	[ with test failures ]
    Starting >>> isaac_ros_image_pipeline
    Starting >>> isaac_ros_visual_slam
    Finished <<< isaac_ros_image_pipeline [1.45s]                                                      
    Finished <<< isaac_ros_visual_slam [27.5s]                 
    
    Summary: 9 packages finished [1min 19s]
      3 packages had stderr output: isaac_ros_image_proc isaac_ros_stereo_image_proc isaac_ros_test
      2 packages had test failures: isaac_ros_image_proc isaac_ros_stereo_image_proc
    

    I was able to move further by installing

    sudo apt-get install ros-foxy-stereo-image-proc (probably you should add it to the requirements?)

    But I still have this:

    
    Summary: 9 packages finished [1min 20s]
      2 packages had stderr output: isaac_ros_image_proc isaac_ros_test
      1 package had test failures: isaac_ros_image_proc
    

    Here is the log file for isaac_ros_image_proc

    Currently, I can't get past it.

    opened by DaddyWesker 21
  • Could not find the resource 'realsense2_camera'

    Hello,

    Trying to run the example with a D435, but the resource is not found.

    Resolved: we have to run git lfs pull in each Isaac ROS repository. Just spent 6 hours finding that out... :-(

    PLEASE ADD TO THE INSTALLATION

    opened by patrickpoirier51 16
  • Release update for Jetson JP5.0 / ZED launch file

    It won't build system-wide on a freshly flashed 20.04 OS with a just-installed ROS Foxy, will it?

    It seems the building of packages gets stuck on the OpenCV prerequisite.

    Will there be a ZED launch file?

    errors

    --- stderr: isaac_ros_image_proc             
    In file included from /home/nvidia/zed_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:33:
    /home/nvidia/zed_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp: In function ‘cv::Mat {anonymous}::GetConvertedMat(VPIImageImpl*&, VPIImageImpl*&, VPIStreamImpl*&, const cv::Mat&, uint32_t, std::string, std::string)’:
    /home/nvidia/zed_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:68:5: error: ‘vpiImageCreateOpenCVMatWrapper’ was not declared in this scope; did you mean ‘vpiImageCreateWrapper’?
       68 |     vpiImageCreateOpenCVMatWrapper(
          |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /home/nvidia/zed_ws/install/isaac_ros_common/include/isaac_r
    

    more errs

    /usr/include/opencv4/opencv2/core/mat.hpp:826:5: note:   candidate expects 2 arguments, 5 provided
    /usr/include/opencv4/opencv2/core/mat.hpp:818:5: note: candidate: ‘cv::Mat::Mat(int, int, int)’
      818 |     Mat(int rows, int cols, int type);
          |     ^~~
    /usr/include/opencv4/opencv2/core/mat.hpp:818:5: note:   candidate expects 3 arguments, 5 provided
    /usr/include/opencv4/opencv2/core/mat.hpp:810:5: note: candidate: ‘cv::Mat::Mat()’
      810 |     Mat() CV_NOEXCEPT;
          |     ^~~
    /usr/include/opencv4/opencv2/core/mat.hpp:810:5: note:   candidate expects 0 arguments, 5 provided
    make[2]: *** [CMakeFiles/image_format_converter_node.dir/build.make:63: CMakeFiles/image_format_converter_node.dir/src/image_format_converter_node.cpp.o] Error 1
    make[1]: *** [CMakeFiles/Makefile2:168: CMakeFiles/image_format_converter_node.dir/all] Error 2
    make: *** [Makefile:141: all] Error 2
    

    Thanks AV

    opened by AndreV84 11
  • VSLAM got poor performance, tracker is lost

    Hi, I'm using the VSLAM as a state estimator in nvblox, with a D435i camera on x86 with an NVIDIA GPU (Ubuntu 20.04.5 LTS, ROS2 Foxy), using the RealSense VSLAM examples in the launch folder of VSLAM and https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md

    but it seems that after the DP2.0 release update, the VSLAM always shows "tracker is lost".

    There are very few feature points, but with VSLAM release DP1.2 there were many feature points shown in RViz.

    I've followed the troubleshooting suggestions:

    Fast motion causing motion blur in the frames.
    Low-lighting conditions.
    The wrong camera info being published.
    
    

    My stereo input is 60 fps, and I use a RealSense D435i and split the camera image so there are no light dots in it. As for the camera info, I'm not sure what the "right" camera info is; all I know is that the camera info can be seen via rostopic echo.

    What I want to ask is: is there any difference between DP2.0 and DP1.2 that would affect the performance?

    And are there any ways to tune the VSLAM aside from the troubleshooting? For the Elbrus VSLAM, can I tune configurations such as the number of feature points, as in ORBSLAM3?

    And is involving the D435's IMU worth a try?

    Thanks.

    verify to close 
    opened by chivas1000 10
  • isaac_ros_visual_slam as simulation returns [CUDA] failure

    Hi, I am working with the isaac_ros_visual_slam package and Gazebo.

    My error:

    [component_container-1] [INFO] [1662379434.366503061] [visual_slam_node]: Use use_gpu: true
    [component_container-1] [ERROR] 140409917632320 [CUDA] failure: 13, file: /home/akorovko/Code/elbrus/src/modules/cuda_modules/src/lk_tracker.cpp, line: 9

    Also, I use a PyTorch environment with CUDA.

    Thanks for the reply.

    wontfix 
    opened by muratkoc503 10
  • Working with Isaac Sim does not work well

    I walked through the instructions at the following link:

    • https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam#working-with-isaac-sim

    But it does not work well

    ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_isaac_sim.launch.py
    

    The following error happened, and I could not confirm the ROS messages.

    [INFO] [launch]: All log files can be found below /home/admin/.ros/log/2022-04-30-04-43-54-457934-omn-a40-ubuntu-2718564
    [INFO] [launch]: Default logging verbosity is set to INFO
    [INFO] [component_container-1]: process started with pid [2718584]
    [component_container-1] [INFO] [1651293834.889712235] [visual_slam_launch_container]: Load Library: /workspaces/isaac_ros-dev/ros_ws2/install/isaac_ros_visual_slam/lib/libvisual_slam_node.so
    [ERROR] [component_container-1]: process has died [pid 2718584, exit code -4, cmd '/opt/ros/foxy/lib/rclcpp_components/component_container --ros-args -r __node:=visual_slam_launch_container -r __ns:=/'].
    

    Could you tell me how to solve this problem?

    verify to close 
    opened by SnowMasaya 8
  • Could not find a package configuration file provided by "vpi"

    Using Ubuntu 20.04.4 and ROS2 Foxy. Following the steps in the readme, but the build fails with this error:

    colcon build --symlink-install
    Starting >>> isaac_ros_common
    Starting >>> isaac_ros_test
    Starting >>> isaac_ros_nvengine_interfaces
    Starting >>> isaac_ros_visual_slam_interfaces
    Finished <<< isaac_ros_test [0.93s]                                  
    --- stderr: isaac_ros_common                                         
    CMake Error at CMakeLists.txt:31 (find_package):
      By not providing "Findvpi.cmake" in CMAKE_MODULE_PATH this project has
      asked CMake to find a package configuration file provided by "vpi", but
      CMake did not find one.
    
      Could not find a package configuration file provided by "vpi" with any of
      the following names:
    
        vpiConfig.cmake
        vpi-config.cmake
    
      Add the installation prefix of "vpi" to CMAKE_PREFIX_PATH or set "vpi_DIR"
      to a directory containing one of the above files.  If "vpi" provides a
      separate development package or SDK, be sure it has been installed.
    
    
    
    opened by MadlyFX 6
  • Can't Access RealSense D435i

    I am running the Docker container from isaac_ros_common on Ubuntu 20.04 on my desktop, not on a Jetson. When running the launch file I get "failed to set power state" errors.

    ros2 launch isaac_ros_visual_odometry isaac_ros_visual_odometry_realsense.launch.py
    [INFO] [launch]: All log files can be found below /home/admin/.ros/log/2022-01-08-00-40-09-432131-pcmamba-2323
    [INFO] [launch]: Default logging verbosity is set to INFO
    /workspaces/isaac_ros-dev/install/isaac_ros_visual_odometry/share/isaac_ros_visual_odometry/launch/isaac_ros_visual_odometry_realsense.launch.py:16: UserWarning: The parameter 'node_executable' is deprecated, use 'executable' instead
      realsense_camera_node = Node(
    [INFO] [component_container-1]: process started with pid [2336]
    [INFO] [realsense2_camera_node-2]: process started with pid [2338]
    [realsense2_camera_node-2] [INFO] [1641602409.565854679] [RealSenseCameraNode]: RealSense ROS v3.2.3
    [realsense2_camera_node-2] [INFO] [1641602409.565879440] [RealSenseCameraNode]: Built with LibRealSense v2.50.0
    [realsense2_camera_node-2] [INFO] [1641602409.565883781] [RealSenseCameraNode]: Running with LibRealSense v2.50.0
    [realsense2_camera_node-2] [WARN] [1641602409.575876964] [RealSenseCameraNode]: Device 1/1 failed with exception: failed to set power state
    [realsense2_camera_node-2] [ERROR] [1641602409.575892514] [RealSenseCameraNode]: The requested device with is NOT found. Will Try again.
    [component_container-1] [INFO] [1641602409.757698050] [visual_odometry_launch_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_visual_odometry/lib/libvisual_odometry_node.so
    [component_container-1] [INFO] [1641602409.789745607] [visual_odometry_launch_container]: Found class: rclcpp_components::NodeFactoryTemplate<isaac_ros::visual_odometry::VisualOdometryNode>
    [component_container-1] [INFO] [1641602409.789775598] [visual_odometry_launch_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<isaac_ros::visual_odometry::VisualOdometryNode>
    [INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/visual_odometry_node' in container '/visual_odometry_launch_container'
    [realsense2_camera_node-2] [WARN] [1641602415.585547627] [RealSenseCameraNode]: Device 1/1 failed with exception: failed to set power state
    [realsense2_camera_node-2] [ERROR] [1641602415.585565458] [RealSenseCameraNode]: The requested device with is NOT found. Will Try again.

    opened by Brac24 6
  • It won't build on Jetson JP4.6.1 except for the isaac common script

    -- stderr: isaac_ros_nvengine                                                                                                
    /usr/bin/ld:/workspaces/isaac_ros-dev/isaac_ros_nvengine/gxf/lib/gxf_jetpack46_1/core/libgxf_core.so: file format not recognized; treating as linker script
    /usr/bin/ld:/workspaces/isaac_ros-dev/isaac_ros_nvengine/gxf/lib/gxf_jetpack46_1/core/libgxf_core.so:1: syntax error
    collect2: error: ld returned 1 exit status
    make[2]: *** [libgxe_node.so] Error 1
    make[1]: *** [CMakeFiles/gxe_node.dir/all] Error 2
    make: *** [all] Error 2
    ---
    Failed   <<< isaac_ros_nvengine [22.3s, exited with code 2]
    

    After running the isaac common image script, I added src, then cloned the repository; then I executed colcon build, but it won't work.

    opened by AndreV84 5
  • Support for RGB-D inputs instead of stereo pair

    Overview

    It would be very helpful for our project if Elbrus could support RGB-D inputs in addition to stereo pairs. We would very much like to leverage this package for GPU-accelerated VO, but we depend on the IR emitter for our RealSense D455, rendering the IR stereo pair unusable. A slightly less desirable but simpler alternative would be for this package to support monocular VO/SLAM, which appears to be supported by Elbrus.

    Use case details

    Our solution leverages the RealSense D455 as our primary vision sensor. The RealSense ROS node publishes an RGB image, depth image, and point cloud. All of these topics are utilized by various other portions of our solution based on specific needs. Additionally, and critically, we need to enable the IR projector on the RealSense to improve fidelity of the disparity map for our application.

    Our custom SLAM solution leverages factor graphs with GTSAM using a variety of standard and custom factor types, including wheel odometry, AprilTags, and floor lines. We do not currently support operation in non-structured environments, but this is a gap that Elbrus should hopefully be able to help fill. The VO output could trivially be added to the factor graph along with other existing factor types, improving overall robustness and expanding supported environments. We have verified that this indeed works in practice when disabling the IR emitter and passing in the IR stereo pair to this package's node.

    Feature request

    With the IR emitter enabled, the projected dot array shows up very clearly in both the left and right IR images. Since the projection moves with the camera, it can give the appearance of no motion between consecutive frames, rendering VO useless. (This is presumably why the option is turned off in the sample launch file.) There are several other potential solutions worth considering.

    RGB-D inputs

    Since we are currently unable to turn off the IR emitter, it would be preferable to have an alternate interface for Elbrus that would allow for providing an RGB image along with a depth map. In cases where an existing depth map is not available, Elbrus clearly has performance advantages for operating directly on distorted images with sparse feature points. However, if a depth map is already available, it would seem advantageous to use it. I don't know the implementation details for Elbrus, but I know this should be possible in a number of ways with varying levels of integration.

    Mono VO/SLAM

    As follow-up, when looking into the interfaces for the Elbrus library, I noted the following for ELBRUS_Track():

     * images - is a pointer to single image in case of mono or
     * array of two images in case of stereo.
    

    In the shorter term, would it be possible to add an additional option to run this node in monocular mode with a single image topic? I've only seen reference to benchmarks performed with stereo cameras, so I'm not sure of the performance implications and extent of support for running mono.


    Thank you for the support!

    verify to close 
    opened by grahamfletcher-ms 4
  • colcon build error

    Hi, I'm using the VSLAM, but I got this error when running colcon build:

    $ colcon build --symlink-install
    Starting >>> nvblox_msgs
    Starting >>> realsense2_camera_msgs
    Starting >>> nvblox
    Starting >>> isaac_ros_test
    Starting >>> isaac_ros_visual_slam_interfaces
    Starting >>> nvblox_cpu_gpu_tools
    Starting >>> nvblox_performance_measurement_msgs
    Starting >>> isaac_ros_apriltag_interfaces
    Starting >>> isaac_ros_bi3d_interfaces
    Starting >>> isaac_ros_common
    Starting >>> isaac_ros_tensor_list_interfaces
    Starting >>> nvblox_isaac_sim
    Finished <<< nvblox_cpu_gpu_tools [3.66s]
    Finished <<< nvblox_isaac_sim [3.61s]
    Finished <<< isaac_ros_test [3.82s]
    Finished <<< isaac_ros_bi3d_interfaces [14.3s]
    Finished <<< isaac_ros_apriltag_interfaces [15.2s]
    Finished <<< realsense2_camera_msgs [15.3s]
    Starting >>> realsense2_camera
    Starting >>> realsense_splitter
    Starting >>> realsense2_description
    Finished <<< isaac_ros_tensor_list_interfaces [15.6s]
    Finished <<< isaac_ros_common [15.7s]
    Finished <<< nvblox_performance_measurement_msgs [15.8s]
    Finished <<< nvblox_msgs [16.2s]
    Starting >>> nvblox_nav2
    Starting >>> nvblox_rviz_plugin
    Finished <<< realsense2_description [2.10s]
    Finished <<< isaac_ros_visual_slam_interfaces [21.3s]
    Starting >>> isaac_ros_visual_slam
    --- stderr: isaac_ros_visual_slam
    CMake Error at /opt/ros/humble/install/share/ament_cmake_core/cmake/index/ament_index_get_resource.cmake:62 (message):
      ament_index_get_resource() called with not existing resource ('elbrus' 'isaac_ros_nitros')
    Call Stack (most recent call first):
      CMakeLists.txt:50 (ament_index_get_resource)
    ---
    Failed <<< isaac_ros_visual_slam [8.06s, exited with code 1]
    Aborted <<< nvblox_nav2 [29.0s]
    Aborted <<< realsense2_camera [35.9s]
    Aborted <<< nvblox_rviz_plugin [35.2s]
    Aborted <<< realsense_splitter [43.0s]
    Aborted <<< nvblox [2min 47s]

    Summary: 12 packages finished [2min 48s]
      1 package failed: isaac_ros_visual_slam
      5 packages aborted: nvblox nvblox_nav2 nvblox_rviz_plugin realsense2_camera realsense_splitter
      1 package had stderr output: isaac_ros_visual_slam
      4 packages not processed

    Problem reproduction: I followed https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox and https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md, which requires installing nvblox and vslam. I think both of my branches are DP2.0, since I reinstalled these today.

    I did build successfully before I reinstalled the system (x86_64, Ubuntu 20.04.5 LTS, ROS2 Foxy, outside the container), so I used git clone --depth=1 to retrieve the last commit, but it didn't work either.

    Is anything going wrong?

    documentation 
    opened by chivas1000 4
  • How to connect with my ZED Stereo Camera

    I have tested the basics shown in the tutorial. Now I want to run the Visual_Slam with my ZED stereo camera. How can I connect it to the camera and run it? Maybe it is a very basic question; I'm just starting out, so I don't know much yet!

    documentation verify to close 
    opened by Shahrullo 4
  • Quickstart doesn't work on Ubuntu 22.04

    Following the quickstart guide, I get the following (related?) error messages.

    When running ./scripts/run_dev.sh I get:

    docker: Error response from daemon: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/moby/21f1770a617988ea5c6e1cc787167812ac103ea7c8541681c11b0f8b05c87252/log.json: no such file or directory): exec: "nvidia-container-runtime": executable file not found in $PATH: <nil>: unknown. ~/Workspace/isaac_ros-dev/src/isaac_ros_common

    Ignoring this and trying colcon build I get:

    CMake Error at CMakeLists.txt:31 (find_package):
      By not providing "Findvpi.cmake" in CMAKE_MODULE_PATH this project has
      asked CMake to find a package configuration file provided by "vpi", but
      CMake did not find one.
    
      Could not find a package configuration file provided by "vpi" with any of
      the following names:
    
        vpiConfig.cmake
        vpi-config.cmake
    
      Add the installation prefix of "vpi" to CMAKE_PREFIX_PATH or set "vpi_DIR"
      to a directory containing one of the above files.  If "vpi" provides a
      separate development package or SDK, be sure it has been installed.
    
    
    
    documentation verify to close 
    opened by geoeo 3
  • Is the libelbrus.so library closed source?

    Hi Team,

    Thank you for open-sourcing this package. In some workshops related to NVIDIA visual SLAM, I was under the impression that NVIDIA Elbrus is open source, but we noticed the backend main library libelbrus.so is still closed source.

    I was wondering if the library itself could be open-sourced as well. We would like to contribute, make some modifications, add some features, and use it in our production robot autonomy stack; or, if it is already open-sourced, I'd appreciate it if you could point me to it.

    Thank you, Ali

    enhancement 
    opened by jahaniam 1
  • "Tracker is lost" on KITTI rosbag

    I prepared the KITTI dataset in the same way as @DaddyWesker and ran the sample code. However, I frequently get the message "Tracker is lost". RViz confirmed that there are no sudden jumps in the image sequence.

    Originally posted by @Corufa in https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam/issues/43#issuecomment-1336357525

    needs info 
    opened by Corufa 2
  • Is this sufficiently robust for long-term and/or large scale use?

    If so, I'd be happy to have some documentation about working with Nav2 + VSLAM in our tutorials: https://navigation.ros.org/tutorials/index.html. However, it's important that it's suitable for Nav2 users in practical applications!

    enhancement 
    opened by SteveMacenski 4