MaRS - a recursive filtering framework that allows for truly modular multi-sensor integration

Overview

MaRS Logo

Introduction


The Modular and Robust State-Estimation Framework, or short, MaRS, is a recursive filtering framework that allows for truly modular multi-sensor integration. The framework further enables the handling of multiple sensors dynamically and performs self-calibration if auxiliary states are defined for the respective sensor module.

Features:

  • Truly-Modular decoupling of Sensor-States from the essential Navigation-States
  • Generalized covariance segmentation for Plug and Play state and covariance blocks
  • Minimal State-Representation at any point in time
  • Integration and removal of sensor modules during runtime
  • Out of sequence sensor measurement handling
  • Developed for computationally constrained platforms
  • Efficient handling of asynchronous and multi-rate sensor information
  • Separation between simple user interaction and the complexity of information handling

Austrian Patent Application Pending

Getting Started

Setup and Building the Project

Command-line setup

$ git clone <url> mars_cpp               # Get the code
$ cd mars_cpp && mkdir build && cd build # Setup the build dir
$ cmake -DCMAKE_BUILD_TYPE=Release ../   # Run cmake
$ make -j                                # Build the project

QT-Creator setup

$ mkdir mars_ws && cd mars_ws # Setup the workspace
$ git clone <url> mars_cpp    # Get the code
$ cd mars_cpp    # Navigate to the code
$ qtcreator .    # Run QT-Creator and press 'Configure Project'

Dependencies

MaRS has three dependencies which are automatically downloaded and linked against:

  • Eigen
  • yaml-cpp
  • G-Test

Thus, no dependencies need to be installed by hand.

Code Documentation

You can find the Doxygen-generated code documentation, after the project has been built, in:

mars_cpp/build/docs/api-docs/html/index.html

Run tests

The MaRS framework offers multiple options to test the code base: individual tests for classes, end-to-end tests with simulated data, and isolated compilation for dependency checks.

Google tests

The test suite mars-test performs tests on all classes and ensures that the member functions perform according to their definition. MaRS also provides two end-to-end tests, which are combined in the mars-e2e-test suite. The two tests consist of an IMU-propagation-only scenario and an IMU-with-pose-update scenario. The inputs for both test cases are synthetically generated datasets, and the end result of each test run is compared to the ground truth.

$ cd build                # Enter the build directory and run:
$ make test mars-test     # Tests for individual classes
$ make test mars-e2e-test # End to end tests with simulated data

End to end test description

Test Name Description
mars_e2e_imu_prop IMU propagation only
mars_e2e_imu_pose_update IMU propagation and pose updates (IMU and pose updates are in sync)

Isolated Build and Tests with Docker

$ cd mars_cpp # Enter the source directory
$ docker build --network=host -t mars_test_env:latest . # Build the Docker image

# The following runs the container, maps the source code (read only)
# and executes the script in 'deploy/scripts/docker_application_test.sh'
$ docker run -it --rm \
  --network=host \
  -v "$(pwd)":/source:ro \
  mars_test_env:latest

Programming

The code base is mostly C++ and follows the Google C++ style convention. A clang-format file with formatting definitions for auto-formatting can be found in the root directory of the project at mars_lib/.clang-format.

The Framework

Technology

MaRS uses covariance and state-vector segmentation to allow for a modular structure and the removal and integration of covariance blocks at a given point in time. This method, in particular, allows the removal of the state augmentation for one sensor modality, the processing of the essential navigation states in separation, and the consecutive augmentation of the navigation states as the particular measurements become available. The majority of previous frameworks would carry the state augmentation for each individual sensor at every processing step, which consumes additional computational power.

As an example: The image below, on the left side, shows the layout of a covariance matrix for a system with three main components. The components are the Navigation States (Core States), a sensor with calibration states (Sensor A), and a second sensor with calibration states (Sensor B). Previous frameworks would handle the full-sized covariance matrix throughout every filtering step, which includes the propagation of the core states, updates of Sensor A or Sensor B, as well as re-propagation in the case of delayed sensor measurements.

The right side of this diagram shows how MaRS handles the same events. The navigation states are updated in isolation; in the event of a sensor measurement, the corresponding sensor information such as the state, sensor covariance, and cross-correlations with the core states is merged with the core state and updated. During this update step, it is important to notice that the covariance elements of the sensor have not been propagated while the core state has been propagated. The MaRS framework corrects the individual covariance elements before the merging process.

The same holds for a multi-sensor system in which the update sequence is Sensor A -> Sensor-B -> Sensor-A. The update of Sensor-B would interfere with the straightforward covariance propagation because the covariance of the core state is updated, but the decoupled and stored information on Sensor-A was not updated accordingly. MaRS uses eigenvalue correction to account for the missing information before updating Sensor-A in a consecutive step. Thus, the properties and coherence of the covariance matrices are guaranteed at all times.
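A sketch of the partition described above, written for a system with core states C and two sensors A and B (the block notation here is illustrative, not taken from the paper):

```latex
\mathbf{P} =
\begin{bmatrix}
\mathbf{P}_{CC} & \mathbf{P}_{CA} & \mathbf{P}_{CB} \\
\mathbf{P}_{CA}^{T} & \mathbf{P}_{AA} & \mathbf{P}_{AB} \\
\mathbf{P}_{CB}^{T} & \mathbf{P}_{AB}^{T} & \mathbf{P}_{BB}
\end{bmatrix}
```

During propagation, only the core block P_CC is carried; each sensor stores its own blocks (P_AA with the cross-term P_CA, and P_BB with P_CB) until its next update. The sensor-sensor block P_AB is assumed zero, as stated in the Assumptions section.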

Covariance Segmentation

For the full explanation of the technology, please refer to the MaRS Paper here, or watch this video animation on YouTube for an easier understanding of the covariance-segmentation process:

MaRS Process Animation

Assumptions

It is important to mention that MaRS makes one particular assumption for this method. As illustrated by the image on the covariance segments above, the Sensor-Core cross-covariances (purple elements) are handled correctly. However, the Sensor-Sensor cross-covariance (gray elements) is assumed to be zero and thus not stored throughout the process.

Implementation / System Architecture

System Layout

The MaRS framework is designed such that essential modules are interchangeable, and the definition of the core logic is agnostic to any sensor definition. For this, each component has generically defined interfaces. A simplified Pseudo UML diagram of the system elements is shown below.

Simple Class Structure

This class structure is designed for simple user-level interaction. This means that components that are likely to be changed or extended are simple to interact with, while more complex components are less likely to require modification. The graph below shows the essential components and their complexity level. It is important to mention that changing or adding a sensor update module does not require modifications in any other component.

Modification Levels

If you are interested in how the actual UML diagram for MaRS looks, you can have a look at the image below or check out the Doxygen dependency graphs described here.

Class Structure

The Buffer and its Data Structure

MaRS has a single buffer that holds all the information needed to operate the framework and to perform out-of-order updates. The data outline of a single buffer entry looks as follows:

Buffer Data Structure

Each buffer entry has a timestamp, a Core/Sensor data block, a metadata field for indicating the entry type (e.g., measurement, sensor update, core state), and a reference to the update sensor class instance for processing the Core/Sensor data field.

Note: When using the provided buffer API, the order of buffer entries is guaranteed to be chronological.

MaRS uses type erasure to allow the storage of different sensor types. Even though sensor modules are derived from the same abstract class, the overlaying structure can change for each sensor type. Thus MaRS casts the Core/Sensor data block to a shared void pointer. The referenced sensor module instance is then used to cast the individual type correctly.

Shared pointers are used heavily in this framework, especially for the storage of states and the shared usage of the various framework modules. The buffer holds shared pointers to the state elements. These elements are referenced in the buffer and passed to sensor instances for processing. The smart pointers are passed by value to functions that take over their copy with std::move(). This is more efficient, and it makes clear that the receiving function becomes a part-owner of the object.

One particularly important feature of shared pointers is the shared void pointer. If the last reference to the data element is deleted, the shared pointer automatically calls the destructor of the stored object. Raw pointers to void objects do not follow this behavior: if a raw void pointer goes out of scope, the referenced data element remains allocated and causes a memory leak.

This property is important for the design of the framework because it makes use of type erasure and thus, stores references to objects as void pointers. Thus, states in the buffer that are removed because they exceed the defined maximal storage of the states in the buffer are destructed automatically.
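The destructor behavior described above can be demonstrated with a minimal stand-alone sketch (the names Payload and type_erased_cleanup_works are illustrative, not part of the MaRS API):

```cpp
#include <memory>

// Tracks whether the destructor of a payload object ran.
struct Payload
{
  explicit Payload(bool* flag) : destroyed_(flag) {}
  ~Payload() { *destroyed_ = true; }
  bool* destroyed_;
};

// Stores the payload behind a type-erased shared void pointer, as the
// MaRS buffer does. When the last shared reference goes out of scope,
// the correct ~Payload() destructor is still invoked, because
// std::shared_ptr captures the concrete deleter at construction time.
inline bool type_erased_cleanup_works()
{
  bool destroyed = false;
  {
    std::shared_ptr<void> erased = std::make_shared<Payload>(&destroyed);
    // 'erased' has static type void, yet owns a Payload.
  }  // last reference dropped here -> ~Payload() runs
  return destroyed;
}
```

A raw `void*` in the same position would leak the Payload, since `delete` on a `void*` cannot invoke the destructor.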

Measurement handling

All measurements are handled in the same fashion, which makes it simple for any middleware integration. As described in this section, a dedicated propagation sensor is defined on system start. Based on this, the system can distinguish between propagation and update sensors. To process any measurement, only the following line needs to be called:

// sensor_sptr: Pointer to the corresponding sensor instance
// timestamp: Measurement timestamp
// data: MaRS buffer data element
core_logic_.ProcessMeasurement(sensor_sptr, timestamp, data)

This is the content of any MaRS buffer entry and needs to be available in, e.g., a callback triggered by a specific sensor. Within the ProcessMeasurement(...) function, MaRS separates the process for propagation and updates and performs the routines shown below. At this point, the framework also applies logic to determine whether the measurement is still meaningful to the system (e.g., it could be out of order and too old) and processes out-of-order updates that are within limits accordingly (see here).

State Propagation UML

Propagation UML

Sensor Updates UML

Update UML

Out of Order Updates

The handling of out-of-order updates within MaRS is straightforward. The general buffer history has one measurement followed by a state update entry. If an out-of-order measurement occurs, it is added to the buffer at the position given by its timestamp. All following state entries up to the current state are deleted (based on the metadata tag), and the measurements are fed to the ProcessMeasurement(...) function to generate the new state updates up to the current time.
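The delete-and-replay logic can be sketched in isolation. Entry, EntryType, and insert_out_of_order below are simplified stand-ins for illustration, not MaRS types:

```cpp
#include <algorithm>
#include <vector>

// Simplified buffer entry: a timestamp plus a metadata tag.
enum class EntryType { Measurement, State };

struct Entry
{
  double t;
  EntryType type;
};

// Inserts a delayed measurement at its chronological position, deletes all
// state entries from that point onward (identified via the metadata tag),
// and returns the timestamps of the measurements that must be re-fed to
// a ProcessMeasurement-style routine to rebuild the state history.
inline std::vector<double> insert_out_of_order(std::vector<Entry>& buffer, double t_meas)
{
  // 1) insert the measurement by timestamp (buffer is kept chronological)
  auto pos = std::upper_bound(buffer.begin(), buffer.end(), t_meas,
                              [](double t, const Entry& e) { return t < e.t; });
  pos = buffer.insert(pos, Entry{t_meas, EntryType::Measurement});

  // 2) drop all state entries at or after the inserted measurement
  buffer.erase(std::remove_if(pos, buffer.end(),
                              [](const Entry& e) { return e.type == EntryType::State; }),
               buffer.end());

  // 3) collect the measurements that need reprocessing
  std::vector<double> to_reprocess;
  for (auto it = pos; it != buffer.end(); ++it)
    to_reprocess.push_back(it->t);
  return to_reprocess;
}
```

With a history of measurement/state pairs at t = 1, 2, 3 and a delayed measurement at t = 1.5, the sketch keeps the state at t = 1 and replays the measurements at t = 1.5, 2, and 3.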

Advanced Initialization

MaRS has a second buffer instance that can be used for advanced initialization. All sensor data for propagation or updates can be stored in this buffer and used in a pre-processing step before the actual initialization. By default, the framework uses the latest propagation sensor for initialization. The sequence of events is shown below:

Initialization UML

Default Navigation State

The Navigation-States (core states) are the essential states for localizing a robot in the world; these states are propagated with IMU measurements.

General Nomenclature

The translation A_p_BC defines frame C with respect to frame B, expressed in frame A. The translation is expressed in frame B if the subscript A is not defined. The quaternion q_AB describes the rotation of frame B with respect to frame A. R_(q_AB) denotes the conversion of quaternion q_AB to its corresponding rotation matrix. Please note that this framework uses the Hamilton notation for the quaternion representation.

Symbols

Symbol Description
X_core Navigation state
p_wi Translation of the robot IMU/body frame expressed w.r.t. the world frame
v_wi Velocity of the robot IMU/body frame expressed w.r.t. the world frame
q_wi Orientation of the robot IMU/body frame expressed w.r.t. the world frame (Hamiltonian)
b_w Gyroscopic bias
b_a Accelerometer bias
n_w Zero mean white Gaussian noise of the gyroscope measurement
n_a Zero mean white Gaussian noise of the accelerometer measurement
Omega(w) The right side quaternion multiplication matrix for w

State-Definition
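The original README renders the state definition as an image; consistent with the symbol list above, the IMU-driven core state can be written as (a reconstruction, not the rendered original):

```latex
\mathbf{X}_{\mathrm{core}} =
\left[\;
\mathbf{p}_{wi}^{T} \;\;
\mathbf{v}_{wi}^{T} \;\;
\mathbf{q}_{wi}^{T} \;\;
\mathbf{b}_{w}^{T} \;\;
\mathbf{b}_{a}^{T}
\;\right]^{T}
```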

State-Dynamics
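The dynamics are likewise rendered as an image in the original; they follow the standard strap-down IMU propagation model, sketched here with the symbols above (a_m and w_m are the raw accelerometer and gyroscope measurements, g the gravity vector in the world frame):

```latex
\dot{\mathbf{p}}_{wi} = \mathbf{v}_{wi}, \qquad
\dot{\mathbf{v}}_{wi} = \mathbf{R}_{(\mathbf{q}_{wi})}\left(\mathbf{a}_{m} - \mathbf{b}_{a} - \mathbf{n}_{a}\right) + \mathbf{g}, \qquad
\dot{\mathbf{q}}_{wi} = \tfrac{1}{2}\,\Omega\!\left(\boldsymbol{\omega}_{m} - \mathbf{b}_{w} - \mathbf{n}_{w}\right)\mathbf{q}_{wi}
```

The biases are modeled as random walks driven by zero-mean white Gaussian noise.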

Provided Sensor Modules (Plug and Play)

New sensor modules can be added in a simple and straightforward fashion. Please consult the Tutorial section on how to use existing sensor modules and how to implement new sensor modules. Please find a list of pre-defined sensor modules below.

Position (3 DoF)

Symbols:

Symbol Definition
p_ip Translation of the position sensor w.r.t. the robot IMU/body frame

Measurement equation:
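The measurement equation is shown as an image in the original README; a standard formulation for a 3 DoF position sensor with extrinsic calibration p_ip is (a hedged reconstruction):

```latex
\mathbf{z}_{\mathrm{pos}} = \mathbf{p}_{wi} + \mathbf{R}_{(\mathbf{q}_{wi})}\,\mathbf{p}_{ip} + \mathbf{n}_{\mathrm{pos}}
```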

Pose (6 DoF)

Symbols:

Symbol Definition
p_ip Translation of the pose sensor w.r.t. the robot IMU/body frame
q_ip Orientation of the pose sensor w.r.t. the robot IMU/body frame

Measurement equation:
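The rendered equation is missing here as well; a standard model for a 6 DoF pose sensor with extrinsics p_ip and q_ip is (a hedged reconstruction):

```latex
\mathbf{z}_{p} = \mathbf{p}_{wi} + \mathbf{R}_{(\mathbf{q}_{wi})}\,\mathbf{p}_{ip} + \mathbf{n}_{p}, \qquad
\mathbf{z}_{q} = \mathbf{q}_{wi} \otimes \mathbf{q}_{ip}
```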

Loosely-Coupled Vision (6 DoF)

Symbols:

Symbol Definition
p_wv Translation of the vision world frame w.r.t. the world frame
q_wv Orientation of the vision world frame w.r.t. the world frame
p_ic Translation of the camera sensor w.r.t. the robot IMU/body frame
lambda Vision scale

Measurement equation:

GNSS with local coordinate transforms (3 DoF)

Symbols:

Symbol Definition
p_wg Translation of the GNSS world frame w.r.t. the world frame
q_wg Orientation of the GNSS world frame w.r.t. the world frame
p_ig Translation of the GNSS sensor w.r.t. the robot IMU/body frame

Measurement equation:

Magnetometer (3 DoF)

Symbols:

Symbol Definition
q_im Orientation of the magnetometer w.r.t. the robot IMU/body frame
m_w Magnetic field vector expressed in the world frame

Measurement equation:

Barometric Pressure (1 DoF)

Symbols:

Symbol Definition
p_ip Translation of the pressure sensor w.r.t. the robot IMU/body frame

Measurement equation:

Package Layout/Codebase

Generated with tree -a -L 3 --noreport --charset unicode > layout.md

├── CMakeLists.txt
├── include
│   └── mars
│       ├── buffer.h
│       ├── core_logic.h
│       ├── core_state.h
│       ├── data_utils
│       │   ├── read_csv.h
│       │   ├── read_pose_data.h
│       │   └── read_sim_data.h
│       ├── ekf.h
│       ├── general_functions
│       │   └── utils.h
│       ├── mars_api.h
│       ├── mars_export.h
│       ├── mars_features.h
│       ├── nearest_cov.h
│       ├── sensor_manager.h
│       ├── sensors
│       │   ├── bind_sensor_data.h
│       │   ├── imu
│       │   │   ├── imu_measurement_type.h
│       │   │   └── imu_sensor_class.h
│       │   ├── pose
│       │   │   ├── pose_measurement_type.cpp
│       │   │   ├── pose_measurement_type.h
│       │   │   ├── pose_sensor_class.cpp
│       │   │   ├── pose_sensor_class.h
│       │   │   ├── pose_sensor_state_type.cpp
│       │   │   └── pose_sensor_state_type.h
│       │   ├── position
│       │   │   └── ...
│       │   ├── sensor_abs_class.h
│       │   ├── sensor_interface.h
│       │   └── update_sensor_abs_class.h
│       ├── TBuffer.h
│       ├── time.h
│       ├── timestamp.h
│       └── type_definitions
│           ├── base_states.h
│           ├── buffer_data_type.h
│           ├── buffer_entry_type.h
│           ├── core_state_type.h
│           └── core_type.h
├── include_local
│   └── mars
│       ├── helper.hpp
│       └── utils.hpp
└── source
    ├── buffer.cpp
    ├── buffer_entry_type.cpp
    ├── core_logic.cpp
    ├── core_state_calc_q.cpp
    ├── core_state.cpp
    ├── ekf.cpp
    ├── nearest_cov.cpp
    ├── sensor_abs_class.cpp
    ├── sensor_interface.cpp
    ├── sensor_manager.cpp
    ├── time.cpp
    ├── timestamp.cpp
    └── utils.cpp

Tutorials

This section explains the interaction with the essential components of the MaRS Framework and how to customize components to different state-estimation modularities.

Stand-Alone Usage and Middleware integration

Because MaRS exposes a plain C++ API, the library can be used directly or within any middleware framework with no significant overhead.

The only difference between the two scenarios is the method in which the data is passed to the framework. To use MaRS without any middleware, you can have a look at one of the end2end tests that MaRS provides and define your own executable in the same fashion. You can use the outline of source/test/mars-e2e-test/mars_e2e_imu_pose_update.cpp as a reference. MaRS already provides methods to read data from a CSV file and store results after processing.

For the implementation of a MaRS wrapper, you can use this project as a reference: MaRS ROS. The sections below make use of code snippets from the MaRS ROS wrapper.

Essential Component Instantiation

MaRS needs a few persistent component instantiations to operate. The instantiation and association of these components are shown below.

The code

#include <mars/core_logic.h>
#include <mars/core_state.h>
#include <mars/sensors/imu/imu_measurement_type.h>
#include <mars/sensors/imu/imu_sensor_class.h>
#include <mars/sensors/pose/pose_measurement_type.h>
#include <mars/type_definitions/buffer_data_type.h>
#include <mars/type_definitions/buffer_entry_type.h>
#include <mars_msg_conv.h>
#include <sensor_msgs/Imu.h>
#include <Eigen/Dense>

// Setup Framework Components
imu_sensor_sptr_ = std::make_shared<mars::ImuSensorClass>("IMU");
core_states_sptr_ = std::make_shared<mars::CoreState>();
core_states_sptr_.get()->set_propagation_sensor(imu_sensor_sptr_);
core_logic_ = mars::CoreLogic(core_states_sptr_);
core_logic_.buffer_.set_max_buffer_size(800);

core_states_sptr_->set_noise_std(
    Eigen::Vector3d(0.013, 0.013, 0.013), Eigen::Vector3d(0.0013, 0.0013, 0.0013),
    Eigen::Vector3d(0.083, 0.083, 0.083), Eigen::Vector3d(0.0083, 0.0083, 0.0083));

// Sensor Definition
pose1_sensor_sptr_ = std::make_shared<mars::PoseSensorClass>("Pose1", core_states_sptr_);
Eigen::Matrix<double, 6, 1> pose_meas_std;
pose_meas_std << 0.05, 0.05, 0.05, 3 * (M_PI / 180), 3 * (M_PI / 180), 3 * (M_PI / 180);
pose1_sensor_sptr_->R_ = pose_meas_std.cwiseProduct(pose_meas_std);

// Sensor Calibration
PoseSensorData pose_calibration;
pose_calibration.state_.p_ip_ = Eigen::Vector3d(0.0, 0.0, 0.0);
pose_calibration.state_.q_ip_ = Eigen::Quaterniond::Identity();
Eigen::Matrix<double, 6, 6> pose_cov;
pose_cov.setZero();
pose_cov.diagonal() << 0.0025, 0.0025, 0.0025, 0.0076, 0.0076, 0.0076;  // 5cm, 5deg
pose_calibration.sensor_cov_ = pose_cov;
pose1_sensor_sptr_->set_initial_calib(std::make_shared<PoseSensorData>(pose_calibration));

The code explained

imu_sensor_sptr_ = std::make_shared<mars::ImuSensorClass>("IMU");

This line initializes the IMU sensor instance. Each sensor class can be given a name that is used in logs and warnings for clarity.

core_states_sptr_ = std::make_shared<mars::CoreState>();

This line creates the core states. Core states are only instantiated once and are used to provide methods on how to operate on the core state throughout the framework.

core_states_sptr_.get()->set_propagation_sensor(imu_sensor_sptr_);

At this point, the connection between the core states and the propagation sensor is made. This is used, e.g., to define how an incoming measurement is handled and how the information for the propagation process is routed.

// Sensors
pose1_sensor_sptr_ = std::make_shared<mars::PoseSensorClass>("Pose1", core_states_sptr_);
Eigen::Matrix<double, 6, 1> pose_meas_std;
pose_meas_std << 0.05, 0.05, 0.05, 3 * (M_PI / 180), 3 * (M_PI / 180), 3 * (M_PI / 180);
pose1_sensor_sptr_->R_ = pose_meas_std.cwiseProduct(pose_meas_std);

These lines initialize a single sensor. First, similar to the IMU sensor instantiation, a shared pointer to the specific sensor object is created. Update sensors, compared to the propagation sensor, require the knowledge of the core state design. Thus, the shared pointer to the common core object needs to be provided for the sensor instantiation. Second, each sensor has a measurement noise field that depends on the state definition. Here, the values for the measurement noise are stored in pose1_sensor_sptr_->R_.

// Sensor Calibration
PoseSensorData pose_calibration;
pose_calibration.state_.p_ip_ = Eigen::Vector3d(0.0, 0.0, 0.0);
pose_calibration.state_.q_ip_ = Eigen::Quaterniond::Identity();
Eigen::Matrix<double, 6, 6> pose_cov;
pose_cov.setZero();
pose_cov.diagonal() << 0.0025, 0.0025, 0.0025, 0.0076, 0.0076, 0.0076;  // 5cm, 5deg
pose_calibration.sensor_cov_ = pose_cov;
pose1_sensor_sptr_->set_initial_calib(std::make_shared<PoseSensorData>(pose_calibration));

Each sensor that has calibration states has the option to initialize this calibration. Depending on the sensor module definition, the calibration states are calibrated automatically if sensor->set_initial_calib(...) was not called.

The first lines instantiate a sensor state object that is set in the consecutive lines. The second part generates the covariance matrix and maps it to the state object. In the last line, the state object is passed to the sensor instance to set the calibration parameters.

Navigation State Propagation

The routine for the propagation of the navigation states through propagation sensor measurements is generally not different from the sensor update routine. However, this routine includes the initialization of the filter and is thus shown for completeness.

The code

void MarsWrapperPose::ImuMeasurementCallback(const sensor_msgs::ImuConstPtr& meas)
{
  // Map the measurement to the mars type
  Time timestamp(meas->header.stamp.toSec());

  // Generate a measurement data block
  BufferDataType data;
  data.set_sensor_data(std::make_shared<IMUMeasurementType>(MarsMsgConv::ImuMsgToImuMeas(*meas)));

  // Call process measurement
  core_logic_.ProcessMeasurement(imu_sensor_sptr_, timestamp, data);

  // Initialize the first time at which the propagation sensor occurs
  if (!core_logic_.core_is_initialized_)
  {
    core_logic_.Initialize(p_wi_init_, q_wi_init_);
  }

  if (publish_on_propagation_)
  {
    mars::BufferEntryType latest_state;
    core_logic_.buffer_.get_latest_state(&latest_state);

    mars::CoreStateType latest_core_state =
        static_cast<mars::CoreType*>(latest_state.data_.core_.get())->state_;

    pub_ext_core_state_.publish(MarsMsgConv::ExtCoreStateToMsg(
        latest_state.timestamp_.get_seconds(), latest_core_state));
  }
}

The code explained

// Map the measurement to the mars type
Time timestamp(meas->header.stamp.toSec());

// Generate a measurement data block
BufferDataType data;
data.set_sensor_data(std::make_shared<IMUMeasurementType>(MarsMsgConv::ImuMsgToImuMeas(*meas)));

In the first lines, the data types which may be specific to the middleware are mapped to internal MaRS types. This concerns the timestamp type and sensor measurement. For the ROS wrapper, the MaRS framework already provides conversions between ROS messages and various sensor measurement types. These are then mapped to the buffer data field.

// Call process measurement
core_logic_.ProcessMeasurement(imu_sensor_sptr_, timestamp, data);

In the next step, this information is passed to the ProcessMeasurement routine. This routine requires the shared pointer to the individual sensor instance, the timestamp associated with the measurement, and the measurement in the form of a buffer element data type.

// Initialize the first time at which the propagation sensor occurs
if (!core_logic_.core_is_initialized_)
{
  core_logic_.Initialize(p_wi_init_, q_wi_init_);
}

The ProcessMeasurement routine does not start the filtering process until the Initialize() member function is called. This function can be altered, but in the default case, it uses the latest pose information for the initialization.

mars::BufferEntryType latest_state;
core_logic_.buffer_.get_latest_state(&latest_state);

mars::CoreStateType latest_core_state =
    static_cast<mars::CoreType*>(latest_state.data_.core_.get())->state_;

The last segments cover retrieving the latest data entries and their consecutive publishing. After calling the ProcessMeasurement() function, the buffer is up to date, and the latest entry contains the latest state. First, we instantiate a variable of the buffer entry type, which receives the latest state in the next step with get_latest_state(). After this, we get the "state" element from the data field of the buffer. Since this is a void type, we need to cast it to the CoreType before usage. The buffer entry data type is described here.

pub_ext_core_state_.publish(MarsMsgConv::ExtCoreStateToMsg(
     latest_state.timestamp_.get_seconds(), latest_core_state));

In the final step, we convert the state information to a ROS message and publish it accordingly.

Sensor Updates

The code

void MarsWrapperPose::PoseMeasurementUpdate(std::shared_ptr<mars::PoseSensorClass> sensor_sptr,
                                            const PoseMeasurementType& pose_meas, const Time& timestamp)
{
  // TMP feedback init pose
  p_wi_init_ = pose_meas.position_;
  q_wi_init_ = pose_meas.orientation_;

  // Generate a measurement data block
  BufferDataType data;
  data.set_sensor_data(std::make_shared<PoseMeasurementType>(pose_meas));

  // Call process measurement
  if (!core_logic_.ProcessMeasurement(sensor_sptr, timestamp, data))
  {
    return;
  }

  // Publish the latest sensor state
  mars::BufferEntryType latest_result;
  core_logic_.buffer_.get_latest_sensor_handle_state(sensor_sptr, &latest_result);

  mars::CoreStateType latest_core_state =
      static_cast<mars::CoreType*>(latest_result.data_.core_.get())->state_;
  pub_ext_core_state_.publish(MarsMsgConv::ExtCoreStateToMsg(
      latest_result.timestamp_.get_seconds(), latest_core_state));

  mars::PoseSensorStateType pose_sensor_state =
      sensor_sptr.get()->get_state(latest_result.data_.sensor_);
  pub_pose1_state_.publish(MarsMsgConv::PoseStateToPoseMsg(
      latest_result.timestamp_.get_seconds(), pose_sensor_state));
}

The code explained

// TMP feedback init pose
p_wi_init_ = pose_meas.position_;
q_wi_init_ = pose_meas.orientation_;

These lines depend on the specific usage of the filter, but in general, you need information on how to initialize the filter. In this case, we store the latest pose of a pose sensor and pass it to the core-state initialization routine when the filter should be initialized.

// Generate a measurement data block
BufferDataType data;
data.set_sensor_data(std::make_shared<PoseMeasurementType>(pose_meas));

At this point, we are simply mapping the sensor measurement, which is already provided as the sensor's measurement type.

// Call process measurement
if (!core_logic_.ProcessMeasurement(sensor_sptr, timestamp, data))
{
  return;
}

Similar to the propagation routine, we pass the shared pointer to the specific sensor instance, the time stamp, and sensor measurement to the ProcessMeasurement function. The function returns true if the update was successful.

// Publish the latest sensor state
mars::BufferEntryType latest_result;
core_logic_.buffer_.get_latest_sensor_handle_state(
   sensor_sptr, &latest_result);

At this point, we get the latest core state and state of the sensor (if this sensor has calibration states defined). First, we generate a buffer entry, then get_latest_sensor_handle_state returns the latest state which was generated by the sensor, associated with the shared pointer sensor_sptr.

mars::CoreStateType latest_core_state =
    static_cast<mars::CoreType*>(latest_result.data_.core_.get())->state_;
pub_ext_core_state_.publish(MarsMsgConv::ExtCoreStateToMsg(
    latest_result.timestamp_.get_seconds(), latest_core_state));

Here we use the buffer entry from the previous step and extract the core state information, which is part of the buffer entry data field. The core state MaRS data type is then converted to a ROS message and published.

mars::PoseSensorStateType pose_sensor_state = 
   sensor_sptr.get()->get_state(latest_result.data_.sensor_);
pub_pose1_state_.publish(MarsMsgConv::PoseStateToPoseMsg(
   latest_result.timestamp_.get_seconds(), pose_sensor_state));

These lines act in the same way as for the core state publishing described above. The only difference is that we access the buffer data sensor field and convert the MaRS sensor data type to a different ROS message type. Also, instead of casting the type by hand, the sensor class provides a getter function that performs the cast.

Implementation of new Sensor Models

The definition of a sensor requires three essential files. These files should be placed in a dedicated folder.

Files to copy/generate in source/mars/include/mars/sensors/ :

  • <sensor_name>_measurement_type.h
  • <sensor_name>_sensor_state_type.h
  • <sensor_name>_sensor_class.h

You can copy the files from the pose sensor module and use them as a template.

Steps:

  1. Define the data type for your sensor measurements in <sensor_name>_measurement_type.h
  2. Define the sensor state in <sensor_name>_sensor_state_type.h
  3. In <sensor_name>_sensor_class.h :
    1. Edit the data types to the definition from the two files above
    2. Edit the initialization routine
    3. Edit the measurement mapping
    4. Edit the definition of the Jacobian
    5. Edit the definition of the residual
    6. Edit the definition of the ApplyCorrection(...) member function

Add the files to the CMakeList in source/mars/CMakeLists.txt, in the set(headers ...) section:

set(headers
    ...
    ${include_path}/sensors/<sensor_name>/<sensor_name>_measurement_type.h
    ${include_path}/sensors/<sensor_name>/<sensor_name>_sensor_class.h
    ${include_path}/sensors/<sensor_name>/<sensor_name>_sensor_state_type.h
    ...
    )

If your sensor requires helper classes or has .cpp files, you need to add them to the following section:

set(sources
    ...
    ${include_path}/sensors/<sensor_name>/...
    ...
    )

After these steps have been completed, you can use the sensor as a generalized object in the MaRS framework.

Demonstrations

Closed-Loop Modular Position Estimation

Setup:

  • Ublox GPS Receiver (8Hz)
  • Pixhawk Autopilot IMU (200Hz)
  • MaRS GNSS Pose-Estimation
  • PX4 State-Estimation Bridge
    • Clean and direct interface to our MaRS estimator without chained EKF structures (e.g., via the built-in PX4 EKF2)
    • Link to the complete flight setup: AAU CNS Flight Package

MaRS GNSS Demo

Known Issues

  • Process Noise: At the moment, MaRS only supports the definition of dynamic process noise for the core states, not for sensor auxiliary/calibration states.

Contact

For further information, please contact Christian Brommer

License

This software is made available to the public to use (source-available), licensed under the terms of the BSD-2-Clause-License with no commercial use allowed, the full terms of which are made available in the LICENSE file. No license in patents is granted.

Usage for academic purposes

If you use this software in an academic research setting, please cite the corresponding paper and consult the LICENSE file for a detailed explanation.

@inproceedings{brommer2020,
   author   = {Brommer, Christian and Jung, Roland and Steinbrener, Jan and Weiss, Stephan},
   doi      = {10.1109/LRA.2020.3043195},
   journal  = {IEEE Robotics and Automation Letters},
   title    = {{MaRS: A Modular and Robust Sensor-Fusion Framework}},
   year     = {2020}
}
Owner
Control of Networked Systems - University of Klagenfurt