

Robust Localization of Occluded Targets Using Range-Only Mapping in Aerial Manipulation (RA-L 2022)

2022-06-21 19:05:00 3D vision workshop

Author: [email protected] (Zhihu)

Source: https://zhuanlan.zhihu.com/p/457168226

Editor: 3D Vision Workshop

This post introduces an article of ours that was recently accepted to RA-L. The article is not about SLAM, but it is closely related to SLAM, and it is quite interesting. Only the general idea is presented here; please refer to the paper for details.

Paper: https://shiyuzhao.westlake.edu.cn/style/2022RAL_XiaoyuZhang.pdf

Videos: https://www.youtube.com/watch?v=t6-0zaRuFIY

https://www.bilibili.com/video/BV11T4y1m7tN?share_source=copy_web&vd_source=534ab035a008ff26525503ac9b890e83

Background

In SLAM, we want to estimate the position of the sensor (robot) itself. To do so, an environment map is built from geometric features such as spatial points, and the sensor's pose is then solved from the geometric constraints between the map features and the sensor's observations.

In this work, by contrast, we want to localize a target, that is, to obtain the 3D position of a target point relative to the sensor's coordinate frame. The application background is aerial manipulation, where the target is frequently occluded or lost from view. We therefore aim to localize occluded targets: even when the target is not visible, its relative position is still known.

This problem could also be solved with a SLAM approach, for example by adding the target point to the environment map, from which its position relative to the camera is easily computed. But the accuracy of this approach is affected by the accuracy of the target point's position in the map, the camera's localization accuracy, and other factors.

In this paper, we want target localization that does not depend on the camera's self-localization. We therefore propose a new target localization method modeled on SLAM, for which we design a new map form. The final goal is to compute the relative position of the target point, and the camera is never localized in the process.

Method

The method is built on an RGB-D camera and can also be extended to stereo cameras. The target is represented and saved as a point, which must be specified in the first image frame. In our implementation, feature points represent the target point, so "observing the target point" in this paper means successfully matching against it.

The core algorithm of target localization is actually quite simple: when the target point is observed, save the spatial relationship between the target point and the surrounding feature points; when the target point is not observed, compute its position from the spatial relationships between the matched feature points and the target.

The spatial relationship could take different forms, such as a direction vector, but we chose distance, because distance is independent of the coordinate frame, so the camera's pose is not needed during target localization. Distance-based localization is also relatively simple, similar to trilateration as used in GPS (see, e.g., iBeacon three-point positioning). In space, given four points that are not coplanar and their distances to the target point, the target's position can be computed uniquely.
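The four-point range-only localization described above can be sketched with a standard linearized solve: subtracting one sphere equation from the others cancels the quadratic term and leaves a linear system. The anchor coordinates and target below are illustrative values, not data from the paper.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Recover a 3D point from four non-coplanar anchor points and the
    distances (ranges) to each of them, GPS-style multilateration.
    `anchors` is a (4, 3) array, `ranges` a (4,) array."""
    p = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    # Subtract the first sphere equation ||x - p_0||^2 = r_0^2 from the
    # others: the ||x||^2 terms cancel, leaving the linear system A x = b.
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    return np.linalg.solve(A, b)

# Hypothetical example: four non-coplanar anchors and a known target.
anchors = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
target = np.array([0.3, 0.2, 0.5])
ranges = np.linalg.norm(anchors - target, axis=1)
estimate = trilaterate(anchors, ranges)  # recovers the target position
```

If the four anchors were coplanar, the matrix A would be singular and the solution would not be unique, which is why the non-coplanarity condition matters.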

Specifically, we extract ORB feature points from each image frame. Their depth values come from the depth map or from stereo matching, so the 3D coordinates of each feature point in the camera coordinate frame can be obtained:

[Equation image omitted: 3D coordinates of a feature point in the camera frame]

Doing the same for the target point, the distance between the target point and each feature point can be computed within a single frame:

[Equation images omitted: per-frame distance between the target point and the feature points]
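A minimal sketch of this per-frame computation, assuming standard pinhole intrinsics K; the intrinsic values, pixel coordinates, and depths below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def backproject(uv, depth, K):
    """Back-project a pixel (u, v) with depth d into the camera frame
    via the pinhole model: p = d * K^{-1} [u, v, 1]^T."""
    u, v = uv
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth])

# Hypothetical intrinsics and one frame's measurements.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
p_target = backproject((350.0, 250.0), 1.2, K)   # target point in camera frame
p_feat   = backproject((400.0, 200.0), 1.5, K)   # one ORB feature point
dist = np.linalg.norm(p_target - p_feat)         # range stored in the map
```

The distance `dist` is what the map records; the camera-frame coordinates themselves are discarded, which is what makes the map pose-independent.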

System

Following the method above, we build a target localization system modeled on SLAM:

[Figure: system pipeline]

As you can see, the system is very similar to SLAM: it is likewise split into mapping and localization, but their meanings differ. In mapping, we only save the distances from feature points to the target point, and do not save the 3D coordinates of the points; in the paper, this map is called a target-centered range-only map. Localization means solving for the 3D position of the target point in the current camera frame; its core is the optimization problem built above, and the main difficulty lies in feature point matching.
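The localization step can be sketched as a Gauss-Newton solve over the range residuals between the matched feature points and their stored distances; the map points, distances, and initial value below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def locate_target(points, dists, x0, iters=20):
    """Gauss-Newton solve for the target position x minimizing
    sum_i (||x - p_i|| - d_i)^2, i.e. range-only localization against
    the matched map features of the current frame."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - points                    # (N, 3) offsets to each feature
        rng = np.linalg.norm(diff, axis=1)   # current predicted ranges
        res = rng - dists                    # range residuals
        J = diff / rng[:, None]              # d||x - p_i||/dx = (x - p_i)/||x - p_i||
        dx, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

# Hypothetical map: five non-coplanar feature points with stored ranges.
points = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.],
                   [0., 0., 2.], [1., 1., 1.]])
target_true = np.array([0.4, 0.7, 0.3])
dists = np.linalg.norm(points - target_true, axis=1)
# As in the system described below, the previous frame's result
# would serve as the initial value.
estimate = locate_target(points, dists, x0=[0.5, 0.5, 0.5])
```

With more than four matched points the problem is overdetermined, so the least-squares form also absorbs measurement noise rather than requiring exact ranges.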

Modeled on SLAM systems, feature matching likewise uses several stages, such as matching against the previous frame and matching in a local map. The previous frame's result is used as the initial value of the solver; new map points are created from keyframes, existing map points are updated, and so on. Other implementation details can be found in the paper; they correspond almost one-to-one with a SLAM system, except that there is no loop-closure detection, which could in fact be added too (but I was lazy). The whole system is not complicated, especially for readers familiar with SLAM, so I won't repeat it here.

Experiments

Two sets of data were selected for the experiment , A group came from ICL-NUIM Data sets , Because it can provide the true value of the target point ; The other group is for our own use realsense Collected data , And use VICON Provides the truth value . In two experiments , We are all with SLAM(ORB-SLAM3) The accuracy is compared .

[Figure: one set of experimental data]

One set of experimental data is shown in the figure above. The bottle cap is marked as the target in the first frame. As the UAV moves, the target is occluded by the robot arm or leaves the camera's field of view, but throughout the whole process our method keeps providing the target's position. The localization error is shown in the figure below: the blue curve indicates whether the target point is matched, and the colored curve is the localization error. Whether or not the target is occluded, it can still be localized, and the error stays almost constant. There is a short segment where localization fails due to feature-matching issues and the like, but it recovers afterwards.

[Figures: localization error curves and comparison with SLAM]

The error of the SLAM method is larger, and not only because of accumulated drift. I think this is because SLAM puts all the feature points and camera poses into one optimization problem and solves for the result with the smallest overall error, which does not mean that any individual error (such as the target's position) gets smaller; it may even grow. For example, the figure below shows the result at a certain point in the ICL dataset: with the SLAM method, the target point's position changes sharply after a local optimization, and the error increases immediately.

In summary, with SLAM-based localization, the target's position is affected by the camera pose, the positions of other feature points, and various other error sources. Our method aims to make the target localization accuracy independent of these extra quantities, while also handling target occlusion and loss.

[Figure: target error jump after a local optimization in the SLAM method]

This article is shared for academic purposes only. If there is any infringement, please contact us for deletion.

