Learning Note 24 - Multi-Sensor Post-Fusion Technology
2022-07-02 01:17:00 【FUXI_Willard】
This blog series comprises six columns: 《Overview of Autopilot Technology》, 《Technical Foundation of the Autopilot Vehicle Platform》, 《Autopilot Positioning Technology》, 《Self-Driving Vehicle Environment Perception》, 《Decision and Control of Autonomous Driving Vehicles》, and 《Design and Application of Automatic Driving Systems》. The author is not an expert in the field of autonomous driving, just a beginner exploring the road of automated driving. This series is not finished; I am thinking and summarizing while reading. Friends are welcome to leave suggestions in the comments area and help the author correct mistakes. Thank you!
This column contains my reading notes on the book 《Self-Driving Vehicle Environment Perception》.
2. Multi-Sensor Post-Fusion Technology
Post-fusion technology: each sensor independently outputs its detection results; the data from each sensor are processed first, and the individual results are then fused and summarized into the final perception result.
2.1 Ulm Autonomous Driving: A Modular Fusion Method
The Ulm University autonomous driving project proposes a modular, sensor-independent fusion method that allows sensors to be replaced efficiently; multiple sensors are used in the key modules of grid mapping, localization, and tracking to ensure information redundancy. The algorithm mainly fuses the detection information of three sensor types: radar, camera, and laser scanner. Three IBEO LUX laser scanners are mounted on the front bumper, the camera is mounted behind the windshield, and several radars are installed as well.
Sensor coverage in the accompanying figure (not reproduced here):
- Blue: camera field of view;
- Red: laser scanner sensing range;
- Green: radar sensing range.
The algorithm proposes a hierarchical, modular environment perception system (HMEP) that contains three perception layers: grid mapping, localization, and target tracking.
Each perception layer performs its own sensor fusion and produces an environment-model result. Besides sensor data, a perception layer can also use the results of the layer before it, and the order of the layers follows the abstraction level of the environment-model elements. Because the results of different perception layers may be redundant or even contradictory, a combination module merges all the results into a unified environment model.
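As a rough sketch of this layered structure, the following Python snippet wires three placeholder layers into a combination step; the function and class names are my own illustrative assumptions rather than the Ulm implementation:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class EnvironmentModel:
    """Unified environment model assembled by the combination module."""
    grid_map: Any = None
    ego_pose: Any = None
    tracks: List[Any] = field(default_factory=list)


# Trivial stand-ins for the three perception layers described below.
def grid_mapping_layer(sensor_data: Dict[str, Any]) -> Any:
    return {"cells": []}                       # occupancy grid estimate

def localization_layer(sensor_data: Dict[str, Any], grid_map: Any) -> Any:
    return (0.0, 0.0, 0.0)                     # ego pose (x, y, heading)

def tracking_layer(sensor_data: Dict[str, Any], grid_map: Any, ego_pose: Any) -> List[Any]:
    return []                                  # tracked moving objects


def perceive(sensor_data: Dict[str, Any]) -> EnvironmentModel:
    """Run the layers in order of increasing abstraction; each layer may
    reuse the results of the layers below it."""
    grid_map = grid_mapping_layer(sensor_data)
    ego_pose = localization_layer(sensor_data, grid_map)
    tracks = tracking_layer(sensor_data, grid_map, ego_pose)
    # Combination step: merge possibly redundant or contradictory layer
    # outputs into one unified environment model.
    return EnvironmentModel(grid_map=grid_map, ego_pose=ego_pose, tracks=tracks)


model = perceive({"lidar": [], "camera": [], "radar": []})
```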
- Grid mapping layer

The grid mapping layer divides the surrounding environment into individual grid cells and estimates the occupancy of each cell according to the classical occupancy grid mapping method; the output is an occupancy probability for every cell. The combination module mainly uses this output to predict the boundaries of target objects.
Specifically: based on the sensor data, an inverse sensor model predicts the occupancy probability of each cell, producing what is called a measurement grid. The mapping algorithm then updates the grid map with the measurement grid using a binary Bayes filter, so the data from multiple sensors are fused within the grid mapping layer.
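A minimal sketch of such a binary Bayes filter update in log-odds form is given below; it assumes the inverse sensor model has already produced a measurement grid of per-cell occupancy probabilities, and all grid sizes and values are illustrative:

```python
import numpy as np

def update_occupancy_grid(logodds_map: np.ndarray,
                          measurement_grid: np.ndarray,
                          p_prior: float = 0.5,
                          eps: float = 1e-6) -> np.ndarray:
    """Fuse one measurement grid (per-cell occupancy probabilities in [0, 1])
    into the running log-odds map with a binary Bayes filter."""
    p = np.clip(measurement_grid, eps, 1.0 - eps)
    logodds_meas = np.log(p / (1.0 - p))                # inverse sensor model in log-odds
    logodds_prior = np.log(p_prior / (1.0 - p_prior))   # zero for p_prior = 0.5
    return logodds_map + logodds_meas - logodds_prior   # standard additive update

def to_probability(logodds_map: np.ndarray) -> np.ndarray:
    """Convert the log-odds map back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds_map))

# Example: fuse measurement grids from two different sensors into one map.
grid = np.zeros((200, 200))                             # log-odds 0 == p = 0.5 (unknown)
lidar_meas = np.full((200, 200), 0.5); lidar_meas[100, 50:60] = 0.9   # hypothetical obstacle
radar_meas = np.full((200, 200), 0.5); radar_meas[100, 55:65] = 0.8
for meas in (lidar_meas, radar_meas):
    grid = update_occupancy_grid(grid, meas)
occupancy = to_probability(grid)
```

Because the update is additive in log-odds, measurement grids from any number of sensors can be folded into the same map in the same way.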

- Localization layer

The localization layer fuses sensor detection data, grid-layer information, and a digital map, and outputs the digital map annotated with the ego-vehicle's position.
Specifically: features are extracted with Maximally Stable Extremal Regions (MSER) from the grid map built from the three laser scanners; the features in the grid include tree trunks, road signs, and the like. Based on this feature map, the localization layer uses Monte Carlo Localization (MCL) to estimate the vehicle pose.
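For illustration, here is a toy Monte Carlo Localization cycle (predict, weight, resample) against a small 2D landmark map. The motion and measurement models, landmark positions, and numbers are simplistic placeholders of my own, not the Ulm implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature map: landmark positions (e.g. tree trunks, sign posts).
landmarks = np.array([[10.0, 5.0], [12.0, -3.0], [20.0, 0.0]])

N = 500
particles = rng.normal(loc=[0.0, 0.0, 0.0], scale=[1.0, 1.0, 0.1], size=(N, 3))  # x, y, heading
weights = np.full(N, 1.0 / N)

def predict(particles, v, omega, dt, std=(0.05, 0.02)):
    """Propagate particles with a noisy unicycle motion model."""
    noise_v = rng.normal(0.0, std[0], len(particles))
    noise_w = rng.normal(0.0, std[1], len(particles))
    particles[:, 2] += (omega + noise_w) * dt
    particles[:, 0] += (v + noise_v) * dt * np.cos(particles[:, 2])
    particles[:, 1] += (v + noise_v) * dt * np.sin(particles[:, 2])
    return particles

def update(particles, weights, measured_ranges, sigma=0.3):
    """Re-weight particles by how well predicted landmark ranges match measurements."""
    for lm, z in zip(landmarks, measured_ranges):
        expected = np.linalg.norm(particles[:, :2] - lm, axis=1)
        weights *= np.exp(-0.5 * ((z - expected) / sigma) ** 2)
    weights += 1e-300                        # avoid an all-zero weight vector
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    """Resampling keeps the particle count constant and discards unlikely poses."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filter cycle with made-up odometry and range measurements.
particles = predict(particles, v=1.0, omega=0.0, dt=0.1)
weights = update(particles, weights, measured_ranges=[10.1, 12.3, 20.0])
particles, weights = resample(particles, weights)
pose_estimate = np.average(particles, axis=0, weights=weights)
```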

- Tracking layer

The tracking layer realizes perception of the moving objects in the surrounding environment through centralized fusion of the radar, camera, and lidar detection data; it can also use information from the grid mapping layer and the localization layer to obtain each target's orientation, maximum speed, and other attributes, and thereby accomplish the multi-target tracking task.
The fusion module is implemented with a Labeled Multi-Bernoulli (LMB) filter and outputs a list containing the spatial distribution and existence probability of the target trajectories. For the fused sensor perception the tracking layer uses the Dempster-Shafer method, which exploits the strengths of each sensor and avoids failures caused by the limitations of any single sensor.
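To make the Dempster-Shafer part concrete, below is a minimal, generic implementation of Dempster's rule of combination for the class evidence of two sensors. The frame of discernment and the mass values are invented for illustration; the original system's mass assignment and discounting are not described here:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions over frozenset focal elements (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                       # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}  # normalization

# Illustrative frame of discernment and per-sensor evidence about one object.
CAR, PED, TRUCK = frozenset({"car"}), frozenset({"pedestrian"}), frozenset({"truck"})
ANY = frozenset({"car", "pedestrian", "truck"})

camera_evidence = {CAR: 0.6, PED: 0.1, ANY: 0.3}      # camera is strong on appearance
lidar_evidence  = {CAR: 0.4, TRUCK: 0.2, ANY: 0.4}    # lidar contributes shape cues

fused = dempster_combine(camera_evidence, lidar_evidence)
print(fused)   # combined mass concentrates on {"car"}
```

In this made-up example the fused mass concentrates on the car hypothesis, showing how evidence from complementary sensors reinforces a class decision while leaving residual uncertainty explicit.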
- Summary

The algorithm is aimed at future autonomous driving perception systems; its key capability is that sensors can be replaced without changing the core of the fusion system. Each perception layer provides a generic sensor interface, so additional sensors can be integrated, or existing sensors replaced, without touching the fusion core of the perception system.
The modular structure makes sensor replacement easy, and applying the sensor-independent interface in the grid mapping, localization, and tracking modules means that changing the sensor setup requires no adjustment to the fusion algorithm.
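One way to picture such a sensor-independent interface is sketched below in Python; the class and field names are my own illustrative assumptions, not the actual Ulm interfaces:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    """Sensor-agnostic detection handed to the fusion core."""
    x: float
    y: float
    existence: float            # confidence that the object exists


class Sensor(ABC):
    """Generic interface that every sensor driver implements."""
    @abstractmethod
    def detections(self) -> List[Detection]:
        ...


class LaserScanner(Sensor):
    def detections(self) -> List[Detection]:
        return [Detection(12.0, 1.5, 0.95)]      # stand-in for real driver output


class Radar(Sensor):
    def detections(self) -> List[Detection]:
        return [Detection(12.3, 1.4, 0.80)]


def fuse(sensors: List[Sensor]) -> List[Detection]:
    """The fusion core only sees the generic interface, so sensors can be
    added or swapped without touching this function."""
    return [d for s in sensors for d in s.detections()]


print(fuse([LaserScanner(), Radar()]))
```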
2.2 FOP-MOC Model
Chavez-Garcia et al. proposed the FOP-MOC model, which takes the classification information of targets as the key element of sensor fusion and uses an evidential-framework-based method as the fusion algorithm; it mainly addresses the problems of sensor data association and sensor fusion.
Low-level fusion takes place in the SLAM module; the detection layer fuses the target lists detected by each sensor; the tracking layer fuses the track lists that each sensor module maintains for its targets to generate the final result. FOP-MOC performs sensor fusion at the detection layer to improve the perception capability of the system.
In the FOP-MOC model, the input to the fusion model is the lists of targets detected by the radar, camera, and lidar, and the output is the fused target detection information, which is passed on to the tracking module. The radar and lidar detections are mainly used for moving-object detection, while the images captured by the camera are mainly used for target classification. Each target is described by its position, size, and an evidential distribution over class hypotheses, where the class information is derived from the shape of the detection, the relative speed, and the visual appearance.
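Before class evidence from different sensors can be fused, detections of the same physical object must be associated across sensors. The greedy, gated nearest-neighbour association below is a simplified stand-in for that step (the gate size and detections are invented; the actual FOP-MOC association is evidential and richer):

```python
import numpy as np

def associate(dets_a: np.ndarray, dets_b: np.ndarray, gate: float = 2.0):
    """Greedy nearest-neighbour association of two detection lists (x, y positions).
    Returns index pairs (i, j) whose distance is within the gate."""
    pairs, used_b = [], set()
    for i, da in enumerate(dets_a):
        dists = np.linalg.norm(dets_b - da, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < gate and j not in used_b:
            pairs.append((i, j))
            used_b.add(j)
    return pairs

# Hypothetical lidar and radar detections of the same scene (x, y in metres).
lidar_dets = np.array([[10.0, 2.0], [25.0, -1.0]])
radar_dets = np.array([[10.4, 1.8], [40.0, 3.0]])

print(associate(lidar_dets, radar_dets))   # [(0, 0)]: only the first pair matches
```

Associated detections would then have their class evidence combined (for example with the Dempster-Shafer combination shown earlier) before being handed to the tracking module.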