Learning Note 24: Multi-Sensor Post-Fusion Technology
2022-07-02 01:17:00 【FUXI_Willard】
This blog series comprises six columns: 《Overview of Autonomous Driving Technology》, 《Technical Foundations of Autonomous Vehicle Platforms》, 《Autonomous Driving Localization Technology》, 《Environment Perception for Autonomous Vehicles》, 《Decision and Control of Autonomous Vehicles》, and 《Design and Application of Autonomous Driving Systems》. The author is not an expert in autonomous driving, just a beginner exploring the field, and writes this series while reading and summarizing. Friends are welcome to leave suggestions in the comments and help the author catch mistakes. Thank you!
This column contains reading notes on the book 《Environment Perception for Autonomous Vehicles》.
2. Multi-Sensor Post-Fusion Technology
Post-fusion: each sensor independently processes its own data and outputs detection results; the per-sensor results are then fused and combined into the final perception output.
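As a toy illustration of the idea, assume three sensors each report an independent position estimate with a confidence for the same object; post-fusion combines the finished estimates rather than the raw data. All sensor names and numbers here are hypothetical:

```python
# Minimal post-fusion sketch: each sensor reports object detections
# independently; the fusion step combines the finished estimates.

def fuse_detections(detections):
    """Confidence-weighted average of position estimates for one object."""
    total_w = sum(d["conf"] for d in detections)
    x = sum(d["x"] * d["conf"] for d in detections) / total_w
    y = sum(d["y"] * d["conf"] for d in detections) / total_w
    return {"x": x, "y": y, "conf": max(d["conf"] for d in detections)}

# Independent per-sensor outputs for the same object (illustrative values):
radar  = {"x": 10.2, "y": 3.1, "conf": 0.6}
camera = {"x": 10.0, "y": 3.0, "conf": 0.8}
lidar  = {"x": 10.1, "y": 3.2, "conf": 0.9}

fused = fuse_detections([radar, camera, lidar])
```

The point is architectural: each sensor's detector can be swapped out without touching `fuse_detections`, which is exactly the modularity argument made below.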
2.1 Ulm Autonomous Driving: A Modular Fusion Method
The Ulm University autonomous driving project proposes a modular, sensor-independent fusion method that allows sensors to be replaced efficiently, and uses multiple sensors in the key modules of grid mapping, localization, and tracking to ensure information redundancy. The algorithm fuses detections from three sensor types: radar, camera, and laser scanner. Three IBEO LUX laser scanners are mounted on the front bumper, the camera is mounted behind the windshield, and several radars are fitted around the vehicle.
Sensor coverage in the figure above:
- Blue: camera field of view;
- Red: laser scanner sensing range;
- Green: radar sensing range.
The project proposes a hierarchical, modular environment perception system (HMEP) with three perception layers: grid mapping, localization, and object tracking.
Each perception layer performs its own sensor fusion and produces an environment-model result. Besides raw sensor data, a layer can also use the results of the layer below it; the layers are ordered by the abstraction level of their environment-model elements. Because the results of different layers may be redundant or even contradictory, a combination module merges all results into a single unified environment model.
- Grid mapping layer
The grid mapping layer divides the surroundings into individual grid cells and estimates each cell's occupancy with the classical occupancy grid mapping method; the output is an occupancy probability per cell. The combination module mainly uses this output to predict the boundaries of target objects.
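The classical occupancy update is usually maintained in log-odds form, where fusing successive measurements into a cell reduces to addition. A minimal single-cell sketch, with measurement probabilities standing in for a hypothetical inverse sensor model:

```python
import math

# Binary Bayes (log-odds) update for a single occupancy-grid cell.
# The per-measurement probabilities are illustrative values that a
# hypothetical inverse sensor model might produce.

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Log-odds -> probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                           # log-odds 0 means p = 0.5 (unknown)
for p_meas in (0.7, 0.8, 0.6):    # three measurements hitting the cell
    l += logit(p_meas)            # Bayes update is a sum in log-odds

occupancy = prob(l)               # ~0.93: cell is very likely occupied
```

Because odds multiply, repeated hits from any sensor drive the cell toward occupied, which is how multi-sensor data gets fused at this layer.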
Specifically, an inverse sensor model converts the sensor data into a per-cell occupancy probability, called the measurement grid. The mapping algorithm then updates the occupancy grid from the measurement grid with a binary Bayes filter, fusing the multi-sensor data at the grid mapping layer.
- Localization layer
The localization layer fuses sensor detections, grid-layer information, and a digital map, and outputs the digital map annotated with the vehicle's own position.
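Localization against a feature map is commonly done with a particle filter. A minimal 1-D Monte Carlo localization sketch; the landmark map, sensor model, and all numbers are illustrative and not taken from the Ulm project:

```python
import math
import random

# Minimal 1-D Monte Carlo localization sketch (illustrative values).

random.seed(0)
landmarks = [5.0, 12.0, 20.0]                       # feature positions on a road
true_pos = 11.0
ranges = [abs(lm - true_pos) for lm in landmarks]   # measured ranges to features

# 1) Sample particles uniformly over the map.
particles = [random.uniform(0.0, 25.0) for _ in range(500)]

# 2) Weight each particle by how well it explains the range measurements.
def weight(p):
    err2 = sum((abs(lm - p) - r) ** 2 for lm, r in zip(landmarks, ranges))
    return math.exp(-0.5 * err2 / 0.5 ** 2)         # Gaussian noise, sigma = 0.5

weights = [weight(p) for p in particles]

# 3) Resample particles in proportion to their weights.
particles = random.choices(particles, weights=weights, k=500)

estimate = sum(particles) / len(particles)          # posterior mean position
```

The real system works in 2-D with MSER-extracted features and estimates a full pose (position plus heading), but the sample-weight-resample loop is the same.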
Specifically, features are extracted from the grid map built by the three laser scanners using Maximally Stable Extremal Regions (MSER); the features include tree trunks, road signs, and the like. Given this feature map, the localization layer estimates the vehicle pose with Monte Carlo Localization (MCL).
- Tracking layer
The tracking layer perceives moving objects in the surroundings by centrally fusing radar, camera, and lidar detections. It can also use information from the grid mapping and localization layers to obtain target heading, maximum speed, and other attributes, completing the multi-target tracking task.
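One standard tool for fusing per-sensor beliefs at this layer is Dempster's rule of combination. A minimal sketch over a two-class frame of discernment {car, pedestrian}, with "unknown" as the full set; all mass values are illustrative:

```python
# Dempster's rule of combination for two sensors' class beliefs.

def combine(m1, m2):
    """Combine two mass functions over {car, pedestrian, unknown}."""
    keys = ("car", "pedestrian", "unknown")
    raw = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            mass = m1[a] * m2[b]
            if a == b:
                raw[a] += mass          # X ∩ X = X
            elif a == "unknown":
                raw[b] += mass          # unknown ∩ X = X
            elif b == "unknown":
                raw[a] += mass
            else:
                conflict += mass        # car ∩ pedestrian = ∅
    norm = 1.0 - conflict               # renormalize away the conflict
    return {k: v / norm for k, v in raw.items()}

camera = {"car": 0.7, "pedestrian": 0.1, "unknown": 0.2}
lidar  = {"car": 0.5, "pedestrian": 0.2, "unknown": 0.3}

fused = combine(camera, lidar)          # "car" belief strengthened
```

Agreeing sensors reinforce each other while each sensor can reserve mass on "unknown" where it is weak, which is how the per-sensor limitations mentioned above are avoided.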
The fusion module is implemented with a Labeled Multi-Bernoulli (LMB) filter, which outputs a list of target tracks with their spatial distributions and existence probabilities. The tracking layer uses the Dempster-Shafer method for sensor fusion, which exploits the strengths of each sensor and avoids failures caused by individual sensor limitations.
- Summary
The key capability this algorithm proposes for future autonomous-driving perception systems is the ability to replace sensors without changing the core of the fusion system. Each perception layer provides a generic sensor interface, so additional sensors can be merged in, or existing ones replaced, without touching the fusion core of the perception system.
Its modular structure eases sensor replacement, and the sensor-independent interfaces used in the grid mapping, localization, and tracking modules mean that changing the sensor setup requires no adjustment to the fusion algorithm.
2.2 FOP-MOC Model
Chavez-Garcia et al. proposed the FOP-MOC model, which takes the target's classification information as the key element of sensor fusion and uses an evidential-framework method as the fusion algorithm; it mainly addresses the problems of sensor data association and sensor fusion.
Low-level fusion happens in the SLAM module; the detection layer fuses the target lists detected by each sensor; the tracking layer fuses the track lists each sensor module maintains for its targets to produce the final result. FOP-MOC performs its sensor fusion at the detection layer to improve the perception capability of the system.
The inputs to the FOP-MOC fusion model are the detection target lists from radar, camera, and lidar; the output is the fused target detection information, which is passed on to the tracking module. The radar and lidar detections are mainly used for moving-object detection, while the camera images are mainly used for target classification. Each target is described by an evidential distribution over its position, size, and class hypotheses; the class information comes from the detected shape, relative speed, and visual appearance.
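The data-association problem the model has to solve can be sketched as nearest-neighbour gating: attach each camera classification to the closest range detection inside a distance gate. The coordinates, gate size, and class labels below are illustrative, not from the paper:

```python
# Minimal detection-level association sketch: match camera classifications
# to lidar detections by nearest neighbour within a gate, then attach the
# class label to the fused object.

GATE = 2.0  # maximum association distance in metres (assumed)

def associate(lidar_dets, camera_dets):
    fused = []
    for ld in lidar_dets:
        best, best_d = None, GATE
        for cd in camera_dets:
            d = ((ld["x"] - cd["x"]) ** 2 + (ld["y"] - cd["y"]) ** 2) ** 0.5
            if d < best_d:                  # closest camera detection in gate
                best, best_d = cd, d
        fused.append({"x": ld["x"], "y": ld["y"],
                      "cls": best["cls"] if best else "unknown"})
    return fused

lidar_dets  = [{"x": 10.0, "y": 2.0}, {"x": 30.0, "y": -1.0}]
camera_dets = [{"x": 10.5, "y": 2.2, "cls": "car"}]

objects = associate(lidar_dets, camera_dets)
```

A detection with no camera match keeps an "unknown" class, mirroring how FOP-MOC lets the evidential class distribution stay uncommitted when a sensor provides no evidence.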