Realsense d435 d435i d415 depth camera obtains RGB map, left and right infrared camera map, depth map and IMU data under ROS
2022-07-04 10:13:00 【Wildcraner】
First, make sure your camera driver is installed. Environment configuration is covered in another article of mine: https://blog.csdn.net/weixin_46195203/article/details/119205851
**Step 1:** Create a new folder named informationread (this name is recommended so that the CMakeLists.txt below needs no changes).
**Step 2:** Inside the informationread folder, create two subfolders, build and src, plus a CMakeLists.txt file.
**Step 3:** Copy the following into the CMakeLists.txt file:
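The folder layout described above can be created from a terminal; a minimal sketch, using the recommended names:

```shell
# Create the project folder with build and src subfolders
mkdir -p informationread/build informationread/src
# Create an empty CMakeLists.txt to be filled in the next step
touch informationread/CMakeLists.txt
ls informationread
```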
cmake_minimum_required(VERSION 3.1.0)
project(alldata)
set( CMAKE_CXX_COMPILER "g++" )
set( CMAKE_BUILD_TYPE "Release" )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )
# Enable C++11
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
FIND_PACKAGE(OpenCV REQUIRED)
FIND_PACKAGE(realsense2 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${realsense2_INCLUDE_DIRS})
add_executable(informationread src/GetAlldata.cpp)
target_link_libraries(informationread ${OpenCV_LIBS} ${realsense2_LIBRARY})
**Step 4:** Copy the following code into a .cpp file and place it in the src folder (naming the file GetAlldata.cpp is recommended, so the CMakeLists.txt above needs no changes).
#include <librealsense2/rs.hpp>
// include OpenCV header file
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
#define width 640
#define height 480
#define fps 30
int main(int argc, char** argv) try
{
// check whether any device is connected
rs2::context ctx;
auto list = ctx.query_devices(); // Get a snapshot of currently connected devices
if (list.size() == 0)
throw std::runtime_error("No device detected. Is it plugged in?");
rs2::device dev = list.front();
//
rs2::frameset frames;
//Construct a pipeline which abstracts the device
rs2::pipeline pipe;// establish a communication pipeline with the device; see https://baike.so.com/doc/1559953-1649001.html for an explanation of "pipeline"
//Create a configuration for configuring the pipeline with a non default profile
rs2::config cfg;
//Add desired streams to configuration
cfg.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_BGR8, fps);// Color images
cfg.enable_stream(RS2_STREAM_DEPTH, width, height, RS2_FORMAT_Z16,fps);// Depth image
cfg.enable_stream(RS2_STREAM_INFRARED, 1, width, height, RS2_FORMAT_Y8, fps);// Infrared image 1
cfg.enable_stream(RS2_STREAM_INFRARED, 2, width, height, RS2_FORMAT_Y8, fps);// Infrared image 2
// To get IMU data, uncomment the following two lines and set imu to 1 below
//cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);// accelerometer
//cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);// gyroscope
int imu = 0;// set to 1 to read IMU data
// get depth scale
// float depth_scale = get_depth_scale(profile.get_device());
// start stream
pipe.start(cfg);// Instructs the pipeline to start the flow with the requested configuration
while(1)
{
frames = pipe.wait_for_frames();// wait for a coherent set of frames from all configured streams
// Align to depth
rs2::align align_to_depth(RS2_STREAM_DEPTH);
frames = align_to_depth.process(frames);
// Get imu data
if(imu){
if (rs2::motion_frame accel_frame = frames.first_or_default(RS2_STREAM_ACCEL))
{
rs2_vector accel_sample = accel_frame.get_motion_data();
std::cout << "Accel:" << accel_sample.x << ", " << accel_sample.y << ", " << accel_sample.z << std::endl;
}
if (rs2::motion_frame gyro_frame = frames.first_or_default(RS2_STREAM_GYRO))
{
rs2_vector gyro_sample = gyro_frame.get_motion_data();
std::cout << "Gyro:" << gyro_sample.x << ", " << gyro_sample.y << ", " << gyro_sample.z << std::endl;
}
}
//Get each frame
rs2::frame color_frame = frames.get_color_frame();
rs2::frame depth_frame = frames.get_depth_frame();
rs2::video_frame ir_frame_left = frames.get_infrared_frame(1);
rs2::video_frame ir_frame_right = frames.get_infrared_frame(2);
// Creating OpenCV Matrix from a color image
Mat color(Size(width, height), CV_8UC3, (void*)color_frame.get_data(), Mat::AUTO_STEP);
Mat pic_right(Size(width,height), CV_8UC1, (void*)ir_frame_right.get_data());
Mat pic_left(Size(width,height), CV_8UC1, (void*)ir_frame_left.get_data());
Mat pic_depth(Size(width,height), CV_16U, (void*)depth_frame.get_data(), Mat::AUTO_STEP);
// Display in a GUI
namedWindow("Display Image", WINDOW_AUTOSIZE );
imshow("Display Image", color);
waitKey(1);
imshow("Display depth", pic_depth*15);// scale up the 16-bit depth values so the display is visible
waitKey(1);
imshow("Display pic_left", pic_left);
waitKey(1);
imshow("Display pic_right",pic_right);
waitKey(1);
}
return 0;
}
// error
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
By default the code targets a D415 or D435; these models have no IMU, so no IMU data is output. If you want IMU data (e.g. on a D435i), make the small modifications described in the code comments above.
**Step 5:** Open a terminal in the build folder and run:
cmake ..
make -j4
./informationread
You can now see the results running in real time.