RealSense D435 / D435i / D415 depth camera: obtaining the RGB image, left and right infrared images, depth image, and IMU data under ROS
2022-07-04 10:13:00 【Wildcraner】
First, make sure your camera driver is installed. The environment configuration is covered in another article of mine: https://blog.csdn.net/weixin_46195203/article/details/119205851
**Step one:** Create a new folder named informationread (this name is recommended so that you do not have to modify the CMakeLists.txt accordingly).
**Step two:** Inside the informationread folder, create two subfolders, build and src, plus a CMakeLists.txt file; the expected layout is shown below.
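For reference, the resulting layout should look like this (GetAlldata.cpp is added in step four):

informationread
├── CMakeLists.txt
├── build
└── src
    └── GetAlldata.cpp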
**Step three:** Copy the following code into the CMakeLists.txt file:
cmake_minimum_required(VERSION 3.1.0)
project(alldata)
set( CMAKE_CXX_COMPILER "g++" )
set( CMAKE_BUILD_TYPE "Release" )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )
# Enable C++11
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
FIND_PACKAGE(OpenCV REQUIRED)
FIND_PACKAGE(realsense2 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${realsense2_INCLUDE_DIRS})
add_executable(informationread src/GetAlldata.cpp)
target_link_libraries(informationread ${OpenCV_LIBS} ${realsense2_LIBRARY})
**Step four:** Copy the following code into a .cpp file and put it in the src folder (it is suggested to name the file GetAlldata.cpp so that you do not have to modify the CMakeLists.txt accordingly).
#include <librealsense2/rs.hpp>
// include OpenCV header file
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
#define width 640
#define height 480
#define fps 30
int main(int argc, char** argv) try
{
// check whether a device is connected
rs2::context ctx;
auto list = ctx.query_devices(); // Get a snapshot of currently connected devices
if (list.size() == 0)
throw std::runtime_error("No device detected. Is it plugged in?");
rs2::device dev = list.front();
// frameset that will hold the incoming frames
rs2::frameset frames;
//Construct a pipeline which abstracts the device
rs2::pipeline pipe;// create the communication pipeline (see https://baike.so.com/doc/1559953-1649001.html for an explanation of pipelines)
//Create a configuration for configuring the pipeline with a non default profile
rs2::config cfg;
//Add desired streams to configuration
cfg.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_BGR8, fps);// Color images
cfg.enable_stream(RS2_STREAM_DEPTH, width, height, RS2_FORMAT_Z16,fps);// Depth image
cfg.enable_stream(RS2_STREAM_INFRARED, 1, width, height, RS2_FORMAT_Y8, fps);// Infrared image 1
cfg.enable_stream(RS2_STREAM_INFRARED, 2, width, height, RS2_FORMAT_Y8, fps);// Infrared image 2
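// infrared stream indices 1 and 2 correspond to the left and right imagers, matching the get_infrared_frame(1)/(2) calls below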
// To get IMU data, uncomment the following two lines and set imu to 1 below
//cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);// accelerometer
//cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);// gyroscope
int imu = 0;// set to 1 if you want IMU data
// get depth scale
// float depth_scale = get_depth_scale(profile.get_device());
// start stream
pipe.start(cfg);// instruct the pipeline to start streaming with the requested configuration
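// Optional: a minimal sketch of reading the depth scale mentioned in the commented-out
// line above, in case you want to convert raw Z16 depth pixels to meters.
// Replace the plain pipe.start(cfg) call with:
// rs2::pipeline_profile profile = pipe.start(cfg);
// float depth_scale = profile.get_device().first<rs2::depth_sensor>().get_depth_scale();
// A raw depth pixel value multiplied by depth_scale then gives the distance in meters.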
while(1)
{
frames = pipe.wait_for_frames();// wait for the next set of frames from the camera
// Align to depth
rs2::align align_to_depth(RS2_STREAM_DEPTH);
frames = align_to_depth.process(frames);
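// Note: rs2::align could also be constructed once before the loop; re-creating it every frame works but is unnecessary.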
// Get imu data
if(imu){
if (rs2::motion_frame accel_frame = frames.first_or_default(RS2_STREAM_ACCEL))
{
rs2_vector accel_sample = accel_frame.get_motion_data();
std::cout << "Accel:" << accel_sample.x << ", " << accel_sample.y << ", " << accel_sample.z << std::endl;
}
if (rs2::motion_frame gyro_frame = frames.first_or_default(RS2_STREAM_GYRO))
{
rs2_vector gyro_sample = gyro_frame.get_motion_data();
std::cout << "Gyro:" << gyro_sample.x << ", " << gyro_sample.y << ", " << gyro_sample.z << std::endl;
}
}
//Get each frame
rs2::frame color_frame = frames.get_color_frame();
rs2::frame depth_frame = frames.get_depth_frame();
rs2::video_frame ir_frame_left = frames.get_infrared_frame(1);
rs2::video_frame ir_frame_right = frames.get_infrared_frame(2);
// Creating OpenCV Matrix from a color image
Mat color(Size(width, height), CV_8UC3, (void*)color_frame.get_data(), Mat::AUTO_STEP);
Mat pic_right(Size(width,height), CV_8UC1, (void*)ir_frame_right.get_data());
Mat pic_left(Size(width,height), CV_8UC1, (void*)ir_frame_left.get_data());
Mat pic_depth(Size(width,height), CV_16U, (void*)depth_frame.get_data(), Mat::AUTO_STEP);
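// pic_depth holds raw 16-bit depth units; multiply by the depth scale (see the sketch after pipe.start) to convert to meters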
// Display in a GUI
namedWindow("Display Image", WINDOW_AUTOSIZE );
imshow("Display Image", color);
waitKey(1);
imshow("Display depth", pic_depth*15);
waitKey(1);
imshow("Display pic_left", pic_left);
waitKey(1);
imshow("Display pic_right",pic_right);
waitKey(1);
}
return 0;
}
// error handling
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
The code defaults to imu = 0, which suits the D415 and D435: these cameras have no IMU and therefore produce no IMU output.
If you want IMU data (for example on the D435i), make the minor modifications described in the comments, as sketched below.
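For concreteness, a minimal sketch of that modification, reusing the commented-out lines already present in GetAlldata.cpp:

// enable the two motion streams and switch on the IMU branch of the main loop
cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);// accelerometer
cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);// gyroscope
int imu = 1;// the while loop will then print the accel and gyro samples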
**Step five:** Open a terminal in the build folder and run:
cmake ..
make -j4
./informationread
You should now see the four streams displayed in real time.
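If you also want to save the frames to disk rather than just display them, one option (not part of the original program, just a sketch that reuses the Mat objects from GetAlldata.cpp) is to add cv::imwrite calls inside the while loop:

// hypothetical addition inside the while loop, after the Mat objects are created
// (add #include <string> for std::to_string if it is not already pulled in)
static int frame_id = 0;
cv::imwrite("color_" + std::to_string(frame_id) + ".png", color);
cv::imwrite("depth_" + std::to_string(frame_id) + ".png", pic_depth);// 16-bit PNG keeps the raw depth values
cv::imwrite("ir_left_" + std::to_string(frame_id) + ".png", pic_left);
cv::imwrite("ir_right_" + std::to_string(frame_id) + ".png", pic_right);
frame_id++;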