How to get multiple pointclouds mapped with infra1 and infra2? #3271
Comments
Hi @DY-JANG0812 The Infrared 2 topic would be unavailable if the ROS launch was detecting your camera as being on a USB 2.1 connection, even if it was plugged into a USB 3.0 port. This is because Infrared 2 is only supported on a USB 3 connection. You may be able to achieve satisfactory results (though generating a single pointcloud instead of one for each infrared sensor) simply by enabling the pointcloud filter, adding pointcloud.enable:=true to your launch instruction. For example:
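(a minimal sketch, assuming the ROS 2 wrapper's rs_launch.py entry point)

ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true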
There is no need for you to do manual work to calibrate the two infrared topics to depth. A RealSense depth frame is generated inside the camera hardware from the raw left and right infrared frames (not the Infra1 and Infra2 topics) before the frames are even sent along the USB cable to the computer. Also, the left infrared camera has the benefit of always being pixel-perfect aligned, calibrated, and overlapped with the depth map, and is also perfectly time-synchronized.
Thank you for the quick reply. It’s great news for me that the image obtained from the left infrared camera is aligned to the depth coordinate system. Now, I have one more question: if I use the right infrared image without aligning it to the depth coordinate system, does that mean the point cloud mapped to the right IR image might have a coordinate system that is different from the actual right IR image?
@MartyG-RealSense When referring to the left side, is it from the perspective of facing the camera from the front, or from looking at the back of the camera?
Because the 0,0,0 origin point of depth is the center-line of the left infrared sensor, when depth is aligned to infrared the origin point will still be the left infrared sensor and so the same coordinate system will be used. When depth is aligned to color the origin of depth changes to the center-line of the RGB sensor. The camera uses the perspective of looking forwards from behind the back of the camera. This is why, when looking at the camera from the front, the left infrared sensor is on the right-side of the camera.
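As an illustration of how that depth-to-color alignment is requested from the wrapper (a sketch, assuming the ROS 2 wrapper's align_depth.enable parameter):

ros2 launch realsense2_camera rs_launch.py align_depth.enable:=true

The aligned depth image is then published on an aligned_depth_to_color topic under the camera's namespace.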
I'm currently using a RealSense camera for obstacle detection, but I'm having trouble extracting correct depth points (noise) from acrylic surfaces. I wanted to use infrared textures to remove the incorrect depth points, but the issue seems to be that when the acrylic is warped, or when infrared light is either strongly detected or not reflected back to the camera at all, the depth calculation is incorrect because the light is not properly detected. I'm considering using filters like hole-filling filtering or toggling the emitter on/off. Is there an internal feature that ensures only the filled depth points are output during hole filling, instead of noise points?
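(For reference, the options mentioned above can be toggled from the launch instruction. A sketch, assuming the ROS 2 wrapper's auto-generated parameter names for the hole-filling filter and the emitter on/off option:)

ros2 launch realsense2_camera rs_launch.py hole_filling_filter.enable:=true depth_module.emitter_on_off:=true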
As you are using ROS, the best way to strengthen the checking of depth values for confidence in their accuracy, and to exclude values with low confidence, would likely be to load in a json camera configuration file (such as 'high_accuracy') using the ROS parameter json_file_path, as discussed at #2445. I would recommend 'medium_density' over the high_accuracy preset file, as medium_density provides a good balance between accuracy and the amount of detail on the depth image, whilst high_accuracy tends to over-strip the depth detail when eliminating low-confidence values, leaving the image looking sparse.
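Loading a preset json from the launch instruction looks like this (a sketch; the path and filename are placeholders for wherever you saved the downloaded preset file):

ros2 launch realsense2_camera rs_launch.py json_file_path:=/path/to/MedDensityPreset.json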
Issue Description
Hello, I am writing this because I have a question.
I am using RealSense and the ROS wrapper to get two point clouds from a single frame, one mapped to the infrared 1 image and one mapped to the infrared 2 image.
However, it doesn’t work.
I was able to extract two point clouds successfully using the pointcloud filter, but both point clouds are mapped to the infrared image from sensor 2.
An error message saying "(Infrared, 0) sensor isn't supported by current device! -- Skipping..." appears, but the topic /camera0/infra1/image_rect_raw is being published and works perfectly fine.
I have searched through several issues in the repository, but even though I am using USB 3.0, I couldn’t find anyone experiencing a similar issue.
Is it not possible to use both infrared streams (either due to software or hardware limitations)?
If you know of a good way to achieve this, I would greatly appreciate your help (directly calibrating the two image topics to depth would require too much time and effort...).
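(For reference, enabling both infrared streams alongside the pointcloud filter would look something like the sketch below, assuming the ROS 2 wrapper's parameter names; pointcloud.stream_filter selects the stream type used to texture the single generated pointcloud, pointcloud.stream_index_filter selects infrared 1 or 2, and 3 is assumed to be the rs2_stream value for infrared:)

ros2 launch realsense2_camera rs_launch.py enable_infra1:=true enable_infra2:=true pointcloud.enable:=true pointcloud.stream_filter:=3 pointcloud.stream_index_filter:=1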