Hardware-accelerated DNN model inference ROS2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU

Isaac ROS DNN Inference

Overview

This repository provides two NVIDIA GPU-accelerated ROS2 nodes that perform deep learning inference using custom models. One node uses the TensorRT SDK, while the other uses the Triton SDK.

TensorRT is a library that enables faster inference on NVIDIA GPUs; it provides an API for the user to load and execute inference with their own models. The TensorRT ROS2 node in this package integrates this TensorRT API directly, so there is no need to call or use the TensorRT SDK directly. Instead, users simply configure the TensorRT node with their own custom models and parameters, and the node makes the necessary TensorRT API calls to load and execute the model. For further documentation on TensorRT, refer to their main page here.

Triton is a framework that brings up a generic inference server that a user can configure with a model repository, which is a collection of various types of models (e.g., ONNX Runtime, TensorRT Engine Plan, TensorFlow, PyTorch). A brief tutorial on how to set up a model repository is included below, and further documentation on Triton is also available at the Triton GitHub.
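
For reference, a repository containing a single TensorRT model could be laid out as follows (this mirrors the PeopleSemSegnet example in the Triton Inference section below; the model name is illustrative):

    /tmp/models/
    └── peoplesemsegnet/
        ├── config.pbtxt
        └── 1/
            └── model.plan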

The decision between TensorRT and Triton is ultimately up to user preference. Since TensorRT has fewer configuration steps (i.e. it does not require a model repository), generally you can get started faster with TensorRT. However, the TensorRT node only supports ONNX and TensorRT Engine Plans, while the Triton node supports a wider variety of model types. In terms of performance and inference speed, they are both comparable in our benchmarks.

The user configures either node to load a specified model or, in the case of the Triton SDK, a model repository. The nodes expect a ROS2 TensorList message as input and publish the inference result as a ROS2 TensorList message. The definition of the TensorList message (and the Tensor message contained within it) is specified under isaac_ros_common/isaac_ros_nvengine_interfaces/msg. Users are expected to run their own models, which they have either trained themselves (and converted to a compatible format such as ONNX) or downloaded from NGC (in ETLT format) and converted to a TensorRT Engine File using the TAO converter tool. When running the TensorRT node, it is generally better practice to first convert your custom model into a TensorRT Engine Plan file using the TAO converter before running inference. If an ONNX model is provided directly, the TensorRT node will convert it to a TensorRT Engine Plan file before running inference, which extends the initial setup time of the node.

Native model support is provided as separate ROS packages with nodes that can accept an input image message and output a tensor message result. The following packages provide additional native model support with useful walkthroughs and documentation on how to use them.

Both nodes require a pre-processor (encoder) and post-processor (decoder) node. A pre-processor node should take a ROS2 message, perform the pre-processing steps dictated by the model, and then convert it to a ROS2 TensorList message. For example, a pre-processor node could resize an image, normalize it, and then perform the message conversion. A post-processor node, on the other hand, converts the output of the model inference into a usable form. For example, a post-processor node may perform argmax to identify the class label in a classification problem. The specific functionality of these two nodes is application-specific.
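
As an illustration, a minimal post-processor (decoder) node for a classification model might look like the following Python sketch. The TensorList and Tensor field names used here (tensors, data) are assumptions to be checked against the message definitions in isaac_ros_nvengine_interfaces, and the argmax logic is only an example:

    # Hypothetical decoder node: converts a TensorList from /tensor_sub into a class label.
    # Message field names below are assumptions; verify them against isaac_ros_nvengine_interfaces/msg.
    import numpy as np
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String
    from isaac_ros_nvengine_interfaces.msg import TensorList

    class ClassificationDecoderNode(Node):
        def __init__(self):
            super().__init__('classification_decoder')
            self.subscription = self.create_subscription(
                TensorList, 'tensor_sub', self.callback, 10)
            self.publisher = self.create_publisher(String, 'detected_class', 10)

        def callback(self, msg):
            tensor = msg.tensors[0]
            # The tensor data buffer is a flat uint8 array; reinterpret it as float32 scores.
            scores = np.frombuffer(bytes(tensor.data), dtype=np.float32)
            label = String()
            label.data = str(int(np.argmax(scores)))  # argmax -> predicted class index
            self.publisher.publish(label)

    def main():
        rclpy.init()
        rclpy.spin(ClassificationDecoderNode())

    if __name__ == '__main__':
        main()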

Using TensorRT or Triton

This has been tested on ROS2 (Foxy) and should build and run on x86_64 and aarch64 (Jetson).

For more documentation on TensorRT, see here. Note that the TensorRT node integrates the TensorRT API directly, so there is no need to call or use the TensorRT SDK directly.

For more documentation on Triton, see here.

System Requirements

This Isaac ROS package is designed and tested to be compatible with ROS2 Foxy on Jetson hardware, in addition to x86 systems with an NVIDIA GPU. On x86 systems, packages are only supported when run in the provided Isaac ROS Dev Docker container.

Jetson

  • AGX Xavier or Xavier NX
  • JetPack 4.6

x86_64 (in Isaac ROS Dev Docker Container)

  • CUDA 11.1+ supported discrete GPU
  • VPI 1.1.11
  • Ubuntu 20.04+

Note: For best performance on Jetson, ensure that power settings are configured appropriately (Power Management for Jetson).

Docker

You need to use the Isaac ROS development Docker image from Isaac ROS Common, based on the version 21.08 image from Deep Learning Frameworks Containers.

You must first install the NVIDIA Container Toolkit to make use of the Docker container development/runtime environment.

Configure nvidia-container-runtime as the default runtime for Docker by editing /etc/docker/daemon.json to include the following:

    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"

and then restarting Docker: sudo systemctl daemon-reload && sudo systemctl restart docker

Run the following script in isaac_ros_common to build the image and launch the container on x86_64 or Jetson:

$ scripts/run_dev.sh <optional_path>

Dependencies

Setup

  1. Create a ROS2 workspace if one is not already prepared:

    mkdir -p your_ws/src
    

    Note: The workspace can have any name; this guide assumes you name it your_ws.

  2. Clone the Isaac ROS DNN Inference and Isaac ROS Common repositories into your_ws/src. Check that you have Git LFS installed before cloning so that all large files are pulled down:

    sudo apt-get install git-lfs
    
    cd your_ws/src   
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
    
  3. Start the Docker interactive workspace:

    isaac_ros_common/scripts/run_dev.sh your_ws
    

    After this command, you will be inside the container at /workspaces/isaac_ros-dev. Running this command in different terminals will attach to the same container.

    Note: The rest of this README assumes that you are inside this container.

  4. Build and source the workspace:

    cd /workspaces/isaac_ros-dev
    colcon build && . install/setup.bash
    

    Note: We recommend rebuilding the workspace each time source files are edited. To rebuild, first clean the workspace by running rm -r build install log.

  5. (Optional) Run tests to verify complete and correct installation:

    colcon test --executor sequential
    

DNN Models

TensorRT and Triton DNN inference can work with both custom AI models and pre-trained models from the TAO Toolkit hosted on NVIDIA GPU Cloud (NGC). NVIDIA Train, Adapt, and Optimize (TAO) is an AI-model-adaptation platform that simplifies and accelerates the creation of enterprise AI applications and services.

Pre-trained Models on NGC

TAO Toolkit provides NVIDIA pre-trained models for Computer Vision (CV) and Conversational AI applications. More details about pre-trained models are available here. You should be able to leverage these models for inference with the TensorRT and Triton nodes by following steps similar to the ones discussed below.

Download Pre-trained Encrypted TLT Model (.etlt) from NGC

The following steps show how to download models, using PeopleSemSegnet as an example.

  1. From File Browser on the PeopleSemSegnet page, select the model .etlt file in the FILE list. Copy the wget command by clicking ... in the ACTIONS column.

  2. Run the copied command in a terminal to download the ETLT model, as shown in the example below:

    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_v1.0/files/peoplesemsegnet.etlt
    

Convert the Encrypted TLT Model (.etlt) Format to the TensorRT Engine Plan

tao-converter is used to convert encrypted pre-trained models (.etlt) to the TensorRT Engine Plan.
The pre-built tao-converter can be downloaded here.

tao-converter is also included in the Isaac ROS Docker container:

  • x86_64 (CUDA 11.3 / cuDNN 8.1 / TensorRT 8.0): /opt/nvidia/tao/cuda11.3-trt8.0
  • Jetson aarch64 (libraries from JetPack 4.6): /opt/nvidia/tao/jp4.6

A symbolic link (/opt/nvidia/tao/tao-converter) is created to use tao-converter across different platforms.
Tip: Use tao-converter -h for more information on using the tool.

Here are some examples for generating the TensorRT engine file using tao-converter:

  1. Generate an engine file for the fp16 data type:

    mkdir -p /workspaces/isaac_ros-dev/models
    /opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_1,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -e /workspaces/isaac_ros-dev/models/peoplesemsegnet.engine -o softmax_1 peoplesemsegnet.etlt
    

    Note: The information used above, such as the model load key and input dimension, can be retrieved from the PeopleSemSegnet page under the Overview tab. The model input node name and output node name can be found in peoplesemsegnet_int8.txt from File Browser. The output file is specified using the -e option. The tool needs write permission to the output directory.

  2. Generate an engine file for the data type int8:

    mkdir -p /workspaces/isaac_ros-dev/models
    cd /workspaces/isaac_ros-dev/models
    
    # Downloading calibration cache file for Int8.  Check model's webpage for updated wget command.
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_v1.0/files/peoplesemsegnet_int8.txt
    
    # Running tao-converter
    /opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_1,1x3x544x960,1x3x544x960,1x3x544x960 -t int8 -c peoplesemsegnet_int8.txt -e /workspaces/isaac_ros-dev/models/peoplesemsegnet.engine -o softmax_1 peoplesemsegnet.etlt
    

    Note: The calibration cache file (specified using the -c option) is required to generate the int8 engine file. For the PeopleSemSegNet model, it is provided in the File Browser tab.

Custom AI Models

Custom user models or models re-trained through the TAO Toolkit can be used with TensorRT and Triton DNN inference with additional configuration and encoder/decoder implementations. U-Net models are natively supported, but other model architectures can also be supported with additional work. You can implement nodes that transform and pre-process data into a TensorList message (some common encoders are provided in isaac_ros_dnn_encoders) and translate the predicted TensorLists back into semantic messages for your graph (for example, a decoder that produces bounding boxes or image masks). To configure a custom model, you will need to specify the input and output bindings of the expected tensors to the TensorRT or Triton nodes through parameters.
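
For example, the binding-related parameters for a hypothetical custom model could be passed to the TensorRT node roughly as follows (the file paths and binding names are placeholders and must match your own model):

    # Illustrative TensorRT node parameters for a hypothetical custom model.
    # Binding names are placeholders; they must match the bindings defined in the model.
    custom_model_parameters = [{
        'model_file_path': '/tmp/model.onnx',
        'engine_file_path': '/tmp/trt_engine.plan',
        'input_tensor_names': ['input'],
        'input_binding_names': ['input_1'],
        'output_tensor_names': ['output'],
        'output_binding_names': ['softmax_1'],
    }]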

Triton Inference

Setup Triton Model Repository

There are example models for using the ONNX Runtime backend at /workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/test/models/mobilenetv2-1.0_triton_onnx and TensorFlow backend at /workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/test/models/simple_triton_tf.

Here is an example of using the TensorRT backend, which uses the PeopleSemSegnet engine file generated in the tao-converter section above as the model:

  1. Create a model repository and a directory for the model:

    mkdir -p /tmp/models/peoplesemsegnet
    
  2. Create a directory for the model version (e.g. 1):

    mkdir -p /tmp/models/peoplesemsegnet/1
    

    Note that this version should match the model_version parameter for the Triton node.

  3. Copy the generated engine file to the model repository and rename it as model.plan:

    cp /workspaces/isaac_ros-dev/models/peoplesemsegnet.engine /tmp/models/peoplesemsegnet/1/model.plan
    
  4. Create a configuration file for this model at the path /tmp/models/peoplesemsegnet/config.pbtxt. Note that name must match the name of the model directory.

    name: "peoplesemsegnet"
    platform: "tensorrt_plan"
    max_batch_size: 0
    input [
      {
        name: "input_1"
        data_type: TYPE_FP32
        dims: [ 1, 3, 544, 960 ]
      }
    ]
    output [
      {
        name: "softmax_1"
        data_type: TYPE_FP32
        dims: [ 1, 544, 960, 2 ]
      }
    ]
    version_policy: {
      specific {
        versions: [ 1 ]
      }
    }
    

Launch Triton Node

  1. Build isaac_ros_triton package:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_triton && . install/setup.bash
    
  2. The example launch file at src/isaac_ros_dnn_inference/isaac_ros_triton/launch/isaac_ros_triton.py loads and runs the mobilenetv2-1.0 model:

    ros2 launch src/isaac_ros_dnn_inference/isaac_ros_triton/launch/isaac_ros_triton.py
    

    Now the Triton node is set up and running. It listens to the topic /tensor_pub and publishes to the topic /tensor_sub.

  3. In a separate terminal, spin up a node that sends tensors to the Triton node:

    your_ws/src/isaac_ros_common/scripts/run_dev.sh your_ws
    . install/setup.bash
    ros2 run isaac_ros_dnn_inference_test run_test_publisher
    

    This test executable is configured to send random tensors with corresponding dimensions to the /tensor_pub topic.

  4. View the output tensors from the Triton node, which should match the output dimensions of mobilenet:

    ros2 topic echo /tensor_sub
    

    Note that the received tensor has dimensions [1, 1000], while the printed tensor data has a length of 4000; this is because the data type being sent is float32, while the tensor data buffer is specified as uint8, so each float32 element corresponds to 4 uint8 elements.
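
If you want to inspect the decoded values rather than the raw bytes, the uint8 buffer can be reinterpreted as float32, for example (the data field name is an assumption based on the Tensor message definition):

    # Reinterpret a Tensor message's uint8 data buffer as float32 values (4 bytes per element).
    import numpy as np

    def tensor_data_as_float32(tensor_msg):
        # tensor_msg.data is assumed to be the flat uint8 byte buffer of the tensor.
        return np.frombuffer(bytes(tensor_msg.data), dtype=np.float32)

    # For the mobilenet example, a 4000-byte buffer yields 1000 float32 class scores.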

TensorRT Inference

Setup ONNX file or TensorRT Engine Plan

  1. TensorRT inference supports a model in either ONNX format or as a TensorRT Engine Plan. Therefore, in order to run inference using the TensorRT node, either convert your model into ONNX format or convert it into an Engine Plan file for your hardware platform. An example of converting .etlt-formatted models from NGC is shown above in the DNN Models section of this README.

  2. The example model mobilenetv2-1.0 will be used by default when using the provided launch file. To use a custom ONNX model or TensorRT Engine file, copy your ONNX or generated plan file to a known location on your filesystem, e.g. cp mobilenetv2-1.0.onnx /tmp/model.onnx or cp model.plan /tmp/model.plan.

Run TensorRT Node

  1. Build the isaac_ros_tensor_rt package:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_tensor_rt && . install/setup.bash
    
  2. Start the TensorRT node (the default example ONNX model is mobilenetv2-1.0):

  ros2 launch src/isaac_ros_dnn_inference/isaac_ros_tensor_rt/launch/isaac_ros_tensor_rt.py

Note: If using an ONNX model file, the TensorRT node will first generate a TensorRT engine plan file before running inference. The engine_file_path parameter is the location where the TensorRT engine plan file is generated; by default it is set to /tmp/trt_engine.plan.

Note: Generating a TensorRT Engine Plan file takes time initially and it will affect your performance measures. We recommend pre-generating the engine plan file for production use.

  3. Start the TensorRT node (with a custom ONNX model):

    To launch the TensorRT node using a custom ONNX model file, update the following node parameters in the launch file:

    'model_file_path': '<path-to-custom-ONNX-file>'
    

    This will generate a TensorRT Engine Plan at /tmp/trt_engine.plan and then run inference with that model. The user can also specify an engine_file_path in the node parameters to generate the TensorRT Engine Plan in a different location.

  4. Start the TensorRT node (with a custom TensorRT Engine Plan):

    If using a TensorRT Engine Plan file to run inference, the model_file_path will not be used, so an ONNX file does not need to be provided. Instead, inference will be run using the plan file specified by the engine_file_path parameter.

    To launch the TensorRT node using a custom TensorRT Engine Plan file, update the following node parameters in the launch file:

    'engine_file_path': '<path-to-custom-trt-plan-file>',
    'force_engine_update': False
    

    By setting force_engine_update to false, the TensorRT node will first attempt to run inference using the TensorRT Engine Plan file specified by the engine_file_path parameter. If it fails to read the engine plan file, it will attempt to generate a new plan file using the ONNX file specified in model_file_path. Normally this means the node will simply fail and exit, since the default model_file_path is a placeholder value of model.onnx, which presumably does not point to an existing ONNX file. However, if the user happens to specify a valid ONNX file (i.e. one that exists on the filesystem), that file will be used to generate the engine plan and run inference. Therefore, it is important not to specify any model_file_path when running with a custom TensorRT Engine Plan file.

    Once the TensorRT node is set up, it listens to the topic /tensor_pub and publishes results to the topic /tensor_sub.

  5. In a separate terminal, spin up a node that sends tensors to the TensorRT node:

    your_ws/src/isaac_ros_common/scripts/run_dev.sh your_ws
    . install/setup.bash
    ros2 run isaac_ros_dnn_inference_test run_test_publisher
    

    This test executable is configured to send random tensors with corresponding dimensions to the /tensor_pub topic.

  6. View the output tensors from the TensorRT node, which should match the output dimensions of mobilenet:

    ros2 topic echo /tensor_sub
    

    Note that the received tensor has dimensions [1, 1000], while the printed tensor data has a length of 4000; this is because the data type being sent is float32, while the tensor data buffer is specified as uint8, so each float32 element corresponds to 4 uint8 elements.

Package Reference

isaac_ros_tensor_rt

Overview

The isaac_ros_tensor_rt package offers functionality to run inference on any TensorRT-compatible model. It integrates the TensorRT API directly, so the user does not need to write any additional code to use the TensorRT SDK. You only need to provide a model in ONNX or TensorRT Engine Plan format to the TensorRT node through node options and then launch the node to run inference. The launched node runs continuously and processes incoming tensors in real time.

Available Components

Component: TensorRTNode

Topics Subscribed
  • /tensor_pub: The input tensor stream

Topics Published
  • /tensor_sub: The tensor list of output tensors from the model inference

Parameters
  • model_file_path: The absolute path to your model file in the local file system (the model file must be .onnx)
  • engine_file_path: The absolute path to either where you want your TensorRT engine plan to be generated (from your model file) or where your pre-generated engine plan file is located
  • force_engine_update: If set to true, the node will always try to generate a TensorRT engine plan from your model file; it must be set to false to use a pre-generated TensorRT engine plan. This parameter is set to true by default.
  • input_tensor_names: A list of tensor names to be bound to the specified input binding names. Bindings occur in sequential order, so the first name here will be mapped to the first name in input_binding_names.
  • input_binding_names: A list of input tensor binding names specified by the model
  • output_tensor_names: A list of tensor names to be bound to the specified output binding names
  • output_binding_names: A list of output tensor binding names specified by the model
  • verbose: If set to true, the node will enable verbose logging to console from the internal TensorRT execution. This parameter is set to true by default.
  • max_workspace_size: The size of the working space in bytes. The default value is 64 MB.
  • dla_core: The DLA core to use. Fallback to GPU is always enabled. The default setting is GPU only.
  • max_batch_size: The maximum possible batch size, used in case the first dimension is dynamic and treated as the batch size
  • enable_fp16: Enables building a TensorRT engine plan file that uses FP16 precision for inference. If this setting is false, the plan file will use FP32 precision. This setting is true by default.
  • relaxed_dimension_check: Ignores dimensions of 1 for the input-tensor dimension check.
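
As a usage sketch, a TensorRTNode configured to run a pre-generated engine plan could be composed into a container as follows. The plugin name is an assumption and the binding names are placeholders; both should be verified against your model and the package sources:

    # Sketch: launching TensorRTNode with a pre-generated engine plan.
    import launch
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode

    def generate_launch_description():
        tensor_rt_node = ComposableNode(
            name='tensor_rt',
            package='isaac_ros_tensor_rt',
            plugin='isaac_ros::dnn_inference::TensorRTNode',  # assumed plugin name
            parameters=[{
                'engine_file_path': '/tmp/trt_engine.plan',  # pre-generated plan
                'force_engine_update': False,                # do not rebuild from ONNX
                'input_tensor_names': ['input'],
                'input_binding_names': ['input_1'],
                'output_tensor_names': ['output'],
                'output_binding_names': ['softmax_1'],
                'verbose': False,
            }])

        container = ComposableNodeContainer(
            name='tensor_rt_container',
            namespace='',
            package='rclcpp_components',
            executable='component_container',
            composable_node_descriptions=[tensor_rt_node],
            output='screen')

        return launch.LaunchDescription([container])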

isaac_ros_triton

Overview

The isaac_ros_triton package offers functionality to run inference through a native Triton Inference Server. It supports multiple backends (e.g. TensorFlow, PyTorch, TensorRT) and model types. Model repositories and model configuration files need to be set up following the Triton server instructions.

Available Components

Component: TritonNode

Topics Subscribed
  • /tensor_pub: The input tensor stream

Topics Published
  • /tensor_sub: The tensor list of output tensors from the model inference

Parameters
  • storage_type: The tensor allocation storage type for RosBridgeTensorSubscriber. The default value is 1.
  • model_repository_paths: The absolute paths to your model repositories in your local file system (the repository structure should follow Triton requirements).
  • model_name: The name of your model. Under model_repository_paths, there should be a directory with this name, and it should align with the model name in the model configuration under this directory.
  • max_batch_size: The maximum batch size allowed for the model. It should align with the model configuration. The default value is 8.
  • num_concurrent_requests: The number of requests the Triton server can take at a time. This should be set according to the tensor publisher frequency. The default value is 65535.
  • input_tensor_names: A list of tensor names to be bound to the specified input binding names. Bindings occur in sequential order, so the first name here will be mapped to the first name in input_binding_names.
  • input_binding_names: A list of input tensor binding names specified by the model.
  • output_tensor_names: A list of tensor names to be bound to the specified output binding names.
  • output_binding_names: A list of output tensor binding names specified by the model.
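
As a usage sketch, a launch file for a TritonNode configured for the peoplesemsegnet repository set up earlier might look like the following. The plugin name is an assumption; the parameter values follow the example model configuration above:

    # Sketch: launching TritonNode against the /tmp/models repository.
    import launch
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode

    def generate_launch_description():
        triton_node = ComposableNode(
            name='triton',
            package='isaac_ros_triton',
            plugin='isaac_ros::dnn_inference::TritonNode',  # assumed plugin name
            parameters=[{
                'model_name': 'peoplesemsegnet',
                'model_repository_paths': ['/tmp/models'],
                'max_batch_size': 0,
                'input_tensor_names': ['input'],
                'input_binding_names': ['input_1'],
                'output_tensor_names': ['output'],
                'output_binding_names': ['softmax_1'],
            }])

        container = ComposableNodeContainer(
            name='triton_container',
            namespace='',
            package='rclcpp_components',
            executable='component_container',
            composable_node_descriptions=[triton_node],
            output='screen')

        return launch.LaunchDescription([container])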

isaac_ros_dnn_encoders

Overview

The isaac_ros_dnn_encoders package offers functionality for encoding ROS2 messages into ROS2 Tensor messages, including the ability to resize and normalize the tensor before outputting it. Currently, this package only supports ROS2 Image messages. The tensor output will be an NCHW tensor, where N is the number of batches (this will be 1 since this package targets inference), C is the number of color channels of the image, H is the height of the image, and W is the width of the image. Therefore, a neural network that uses this package for preprocessing should support NCHW inputs.

Using isaac_ros_dnn_encoders

This package is not meant to be a standalone package, but serves as a preprocessing step before sending data to TensorRT or Triton. Ensure that the preprocessing steps of your desired network match the preprocessing steps performed by this node. This node is capable of image color space conversion, image resizing and image normalization. To use this node, simply add it to a launch file for your pipeline. The isaac_ros_unet package contains samples.
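
For example, an encoder configured for a 960x544 RGB model could be added to a pipeline launch file roughly as follows (the plugin name is an assumption; the parameters are described under Available Components below, and the remapping simply feeds the inference node's default input topic):

    # Sketch: DnnImageEncoderNode configured for a 960x544 RGB model.
    from launch_ros.descriptions import ComposableNode

    encoder_node = ComposableNode(
        name='dnn_image_encoder',
        package='isaac_ros_dnn_encoders',
        plugin='isaac_ros::dnn_inference::DnnImageEncoderNode',  # assumed plugin name
        parameters=[{
            'network_image_width': 960,
            'network_image_height': 544,
            'network_image_encoding': 'rgb8',
            'tensor_name': 'input',
            'network_normalization_type': 'unit_scaling',
        }],
        # Route the encoder output to the inference node's default input topic.
        remappings=[('encoded_tensor', 'tensor_pub')])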

Available Components

Component: DnnImageEncoderNode

Topics Subscribed
  • image: The image that should be encoded into a tensor

Topics Published
  • encoded_tensor: The resultant tensor after converting the image

Parameters
  • network_image_width: The image width that the network expects. This will be used to resize the input image width. The default value is 224.
  • network_image_height: The image height that the network expects. This will be used to resize the input image height. The default value is 224.
  • network_image_encoding: The image encoding that the network expects. This will be used to convert the color space of the image. This should be either rgb8 (default), bgr8, or mono8.
  • maintain_aspect_ratio: A flag for the encoder to crop the input image to the aspect ratio of network_image_width and network_image_height before resizing. The default value is False.
  • center_crop: A flag for the encoder to crop the center of the image if maintain_aspect_ratio is set to True. The default value is False.
  • tensor_name: The name of the input tensor, which is input by default.
  • network_normalization_type: The type of normalization to perform on the input image. This can be none for no normalization, unit_scaling for normalization between 0 and 1, positive_negative for normalization between -1 and 1, or image_normalization for performing the normalization (image / 255 - mean) / standard_deviation. The default value is unit_scaling.
  • image_mean: If network_normalization_type is set to image_normalization, the per-channel mean of the images used for normalization, which is [0.5, 0.5, 0.5] by default.
  • image_stddev: If network_normalization_type is set to image_normalization, the per-channel standard deviation of the images used for normalization, which is [0.5, 0.5, 0.5] by default.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting. DnnImageEncoderNode will skew the aspect ratio of input images to the target dimensions.

Walkthroughs

Inference on PeopleSemSegnet using Triton

This walkthrough will run inference on the PeopleSemSegnet from NGC using Triton.

  1. Obtain the PeopleSemSegnet ETLT file. The model input should be NCHW and the output should be NHWC, having gone through an activation layer (e.g. softmax); the PeopleSemSegnet model meets these criteria.

    # Create a model repository for version 1
    mkdir -p /tmp/models/peoplesemsegnet/1
    
    # Download the model
    cd /tmp/models/peoplesemsegnet
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_v1.0/files/peoplesemsegnet.etlt
    
  2. Convert the .etlt file to a TensorRT plan file (which defaults to fp32).

    /opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_1,1x3x544x960,1x3x544x960,1x3x544x960 -e /tmp/models/peoplesemsegnet/1/model.plan -o softmax_1 peoplesemsegnet.etlt
    

    Note: The TensorRT plan file should be named model.plan.

  3. Clone the Isaac ROS Image Segmentation repository into your workspace to make the isaac_ros_unet package available.

    cd /workspaces/isaac_ros-dev/src
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_segmentation
    
  4. Create file /tmp/models/peoplesemsegnet/config.pbtxt with the following content:

    name: "peoplesemsegnet"
    platform: "tensorrt_plan"
    max_batch_size: 0
    input [
      {
        name: "input_1"
        data_type: TYPE_FP32
        dims: [ 1, 3, 544, 960 ]
      }
    ]
    output [
      {
        name: "softmax_1"
        data_type: TYPE_FP32
        dims: [ 1, 544, 960, 2 ]
      }
    ]
    version_policy: {
      specific {
        versions: [ 1 ]
      }
    }
    
  5. Modify the isaac_ros_unet launch file located at /workspaces/isaac_ros-dev/src/isaac_ros_image_segmentation/isaac_ros_unet/launch/isaac_ros_unet_triton.launch.py. You will need to update the following lines:

    'model_name': 'peoplesemsegnet',
    'model_repository_paths': ['/tmp/models'],
    

    The rest of the parameters are already set for PeopleSemSegnet. If you are using a custom model, these parameters will also need to be modified.

  6. Rebuild and source isaac_ros_unet:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_unet && . install/setup.bash
    
  7. Start isaac_ros_unet using the launch file:

    ros2 launch isaac_ros_unet isaac_ros_unet_triton.launch.py
    
  8. Set up the image_publisher package if it is not already installed.

    cd /workspaces/isaac_ros-dev/src 
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  9. In a separate terminal, publish an image to /image using image_publisher. For testing purposes, we recommend using the PeopleSemSegnet sample image at the path used in the command below.

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_image_segmentation/isaac_ros_unet/test/test_cases/unet_sample/image.jpg --ros-args -r image_raw:=image
    
  10. In another terminal, launch rqt_image_view as follows:

    ros2 run rqt_image_view rqt_image_view

  11. Inside the rqt_image_view GUI, change the topic to /unet/colored_segmentation_mask to view a colorized segmentation mask. You may also view the raw segmentation, which is published to /unet/raw_segmentation_mask, where the raw pixels correspond to class labels, making it unsuitable for human visual inspection.

These steps can easily be adapted to use TensorRT by referring to the TensorRT Inference section and modifying steps 4-5.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting.

If you are interested in using a custom model of the U-Net architecture, please read the analogous steps for configuring DOPE.

To configure the launch file for your specific model, consult the earlier documentation that describes each of these parameters. Once again, remember to verify that the preprocessing and postprocessing supported by the nodes fit your model. For example, the model should expect an NCHW-formatted input tensor and output an NHWC tensor that has gone through an activation layer (e.g. softmax).

Troubleshooting

Nodes crashed on initial launch reporting shared libraries have a file format not recognized

Many dependent shared library binary files are stored in git-lfs. These files need to be fetched in order for Isaac ROS nodes to function correctly.

Symptoms

/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so: file format not recognized; treating as linker script
/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so:1: syntax error
collect2: error: ld returned 1 exit status
make[2]: *** [libgxe_node.so] Error 1
make[1]: *** [CMakeFiles/gxe_node.dir/all] Error 2
make: *** [all] Error 2

Solution

Run git lfs pull in each Isaac ROS repository you have checked out, especially isaac_ros_common, to ensure all of the large binary files have been downloaded.

Updates

  • 2021-11-03: Split DOPE and U-Net into separate repositories
  • 2021-10-20: Initial release