
Multi-Camera Data Collection Based on Kinect Azure (II)

2022-06-22 07:02:00 GaryW666


In Multi-Camera Data Collection Based on Kinect Azure (I), using dual-camera capture as the example, I introduced the overall approach to Kinect Azure multi-camera data acquisition, which consists of four parts: data collection, device synchronization, device calibration, and data fusion. That article covered capturing depth data and color data, and the method for producing colored point cloud data. This article continues with how to synchronize the devices.
As usual, the reference links come first:
    https://docs.microsoft.com/zh-cn/azure/Kinect-dk/multi-camera-sync
    https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/examples/green_screen/MultiDeviceCapturer.h
The ultimate goal of multi-camera acquisition is to merge the captures into a good point cloud model, and device synchronization is the most basic requirement for that. Synchronization means making every device capture data at the same moment; only when the devices are accurately synchronized can the point cloud data from multiple devices be fused cleanly. There are two levels of synchronization: intra-device and inter-device. Intra-device synchronization aligns the depth sensor with the color sensor and is configured through the depth_delay_off_color_usec property. Inter-device synchronization can be done in hardware or in software; for the hardware approach, refer to the first link above. This article focuses on how to synchronize two devices in software.

I. Preparation

1. Set each device's master/subordinate property.
2. Start all subordinate devices first, then start the master device (both of these points are covered in the previous article in this series).
3. Set the subordinate device's subordinate_delay_off_master_usec property to 0.
4. Set depth_delay_off_color_usec to 80 on one device and -80 on the other. To prevent the depth sensors of multiple devices from interfering with each other, their capture times should be offset by 160 μs or more.
5. For precise device timing, the color camera's exposure must be set manually; auto exposure makes synchronized devices drift out of sync faster. Use k4a_device_set_color_control to switch the exposure time to manual mode, and use the same function to set the powerline frequency to manual mode as well. The values I use here match the green-screen example linked above.
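The preparation steps above can be sketched with the k4a C API roughly as follows. This is a minimal sketch, not the article's exact code: it assumes two already-opened device handles (hypothetical names master_dev and sub_dev), omits error checking, and the exposure value 8000 μs and powerline setting 2 (60 Hz) are taken from the linked green-screen example; adjust them for your own lighting.

```cpp
// Sketch of the preparation steps (steps 1-5 above), assuming master_dev
// and sub_dev are already-opened k4a_device_t handles. Error checking omitted.
#include <k4a/k4a.h>

void configure_and_start(k4a_device_t master_dev, k4a_device_t sub_dev)
{
    k4a_device_configuration_t master_cfg = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    master_cfg.color_format          = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    master_cfg.color_resolution      = K4A_COLOR_RESOLUTION_720P;
    master_cfg.depth_mode            = K4A_DEPTH_MODE_NFOV_UNBINNED;
    master_cfg.camera_fps            = K4A_FRAMES_PER_SECOND_30;
    master_cfg.synchronized_images_only = true;
    master_cfg.wired_sync_mode       = K4A_WIRED_SYNC_MODE_MASTER;     // step 1
    master_cfg.depth_delay_off_color_usec = -80;                       // step 4

    k4a_device_configuration_t sub_cfg = master_cfg;
    sub_cfg.wired_sync_mode          = K4A_WIRED_SYNC_MODE_SUBORDINATE; // step 1
    sub_cfg.subordinate_delay_off_master_usec = 0;                      // step 3
    sub_cfg.depth_delay_off_color_usec = 80;   // step 4: depth captures 160 us apart

    // Step 5: manual exposure and powerline frequency on both devices
    // (8000 us / setting 2 follow the green-screen example; tune for your scene).
    for (k4a_device_t dev : {sub_dev, master_dev})
    {
        k4a_device_set_color_control(dev, K4A_COLOR_CONTROL_EXPOSURE_TIME_ABSOLUTE,
                                     K4A_COLOR_CONTROL_MODE_MANUAL, 8000);
        k4a_device_set_color_control(dev, K4A_COLOR_CONTROL_POWERLINE_FREQUENCY,
                                     K4A_COLOR_CONTROL_MODE_MANUAL, 2 /* 60 Hz */);
    }

    // Step 2: start the subordinate first, then the master.
    k4a_device_start_cameras(sub_dev, &sub_cfg);
    k4a_device_start_cameras(master_dev, &master_cfg);
}
```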

II. Implementation and Code

Synchronization is achieved by comparing and adjusting timestamps. The approach is as follows:
1. The master and subordinate devices each capture one frame and extract its color image (depth images also work).
2. Use k4a_image_get_device_timestamp_usec to read the timestamp of each image.
3. Compute the expected timestamp:
    expected timestamp = master image timestamp + subordinate_delay_off_master_usec
    (if depth images are used, also add the depth_delay_off_color_usec value).
4. Compare the subordinate image's timestamp against the expected timestamp:
    1) If the difference is below the negative of the threshold, the subordinate's timestamp lags; the subordinate recaptures, and we return to step 3.
    2) If the difference is above the threshold, the master's timestamp lags; the master recaptures, and we return to step 3.
    3) If the absolute value of the difference is below the threshold, the two devices are synchronized.
    When synchronizing on color images, I set the threshold to 100 μs; when synchronizing on depth images, to 260 μs.
At this point the two devices are synchronized, but they may still drift out of sync later (though I have not observed this yet). So, to be safe, it is better to call the synchronization routine once for every frame of data collected. Also note: the images obtained inside the synchronization function must be released promptly, otherwise memory usage will explode as the loop runs!!! Don't ask me how I know that…


Copyright notice
This article was written by [GaryW666]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/02/202202220541018224.html