
[ROS2] Intra-process-comm support in CameraPublisher #212

Closed
clydemcqueen opened this issue Nov 6, 2021 · 11 comments

@clydemcqueen commented Nov 6, 2021

Any plans to support zero-copy IPC in CameraPublisher? I think this means adding publish methods in Publisher and CameraPublisher that take unique_ptrs. I'd be willing to look into this if there is interest.
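
For illustration, roughly the kind of overloads I have in mind (a hypothetical sketch; the names and signatures below are illustrative, not the existing image_transport API):

#include <memory>
#include <sensor_msgs/msg/camera_info.hpp>
#include <sensor_msgs/msg/image.hpp>

// Hypothetical sketch only: additional publish overloads that take
// ownership of the messages, so rclcpp can move them through
// intra-process comms instead of copying them.
class CameraPublisher
{
public:
  // Existing style of overload: const references, which generally means
  // the data is copied when handed to the middleware.
  void publish(const sensor_msgs::msg::Image & image,
               const sensor_msgs::msg::CameraInfo & info) const;

  // Proposed style of overload: unique_ptrs transfer ownership.
  void publish(sensor_msgs::msg::Image::UniquePtr image,
               sensor_msgs::msg::CameraInfo::UniquePtr info) const;
};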

@fhwedel-hoe

I would really like to see this. Working with a high-speed camera and a relatively weak system, the copying eats quite a lot of CPU as discussed here.

@ZhenshengLee

Take a look at https://github.com/ZhenshengLee/ros2_shm_msgs, and you'll find it easy to use shm_msgs much like cv_bridge.

The publisher looks like this:

// Talker example from ros2_shm_msgs (abridged). Besides the includes below,
// it needs the shm_msgs Image1m message header and the shm_msgs CvImage
// conversion helper from that package (exact header paths depend on the repo).
#include <chrono>
#include <memory>
#include <opencv2/imgcodecs.hpp>
#include <rclcpp/rclcpp.hpp>

using namespace std::chrono_literals;  // for the 0.1s timer period

class Talker : public rclcpp::Node {
private:
  using Topic = shm_msgs::msg::Image1m;

public:
  explicit Talker(const rclcpp::NodeOptions &options)
      : Node("shm_image1m_talker", options) {

    // Load a test image once; it is re-published on every timer tick.
    m_input_cvimage->image = cv::imread("./res/img/205_182.png");
    m_input_cvimage->header.frame_id = "camera_link";
    m_input_cvimage->encoding = "bgr8";
    // cv::imshow("input image", m_input_cvimage->image);
    // cv::waitKey(0);

    auto publishMessage = [this]() -> void {
      // Borrow a message from the middleware-managed (shared-memory) pool.
      auto loanedMsg = m_publisher->borrow_loaned_message();

      populateLoanedMessage(loanedMsg);

      m_publisher->publish(std::move(loanedMsg));
      // We gave up ownership and loanedMsg is not supposed to be accessed
      // anymore.

      m_count++;
    };

    rclcpp::QoS qos(rclcpp::KeepLast(10));
    m_publisher = this->create_publisher<Topic>("shm_image_1m", qos);

    // Use a timer to schedule periodic message publishing.
    m_timer = this->create_wall_timer(0.1s, publishMessage);
  }

private:
  uint64_t m_count = 1;
  rclcpp::Publisher<Topic>::SharedPtr m_publisher;
  rclcpp::TimerBase::SharedPtr m_timer;
  std::shared_ptr<shm_msgs::CvImage> m_input_cvimage{std::make_shared<shm_msgs::CvImage>()};

  void populateLoanedMessage(rclcpp::LoanedMessage<Topic> &loanedMsg) {
    Topic &msg = loanedMsg.get();

    m_input_cvimage->header.stamp = now();

    // toImageMsg copies the cv::Mat data into the loaned message.
    m_input_cvimage->toImageMsg(msg);

    RCLCPP_INFO(this->get_logger(), "Publishing ");
  }
};

@fhwedel-hoe

That is a solid approach. Thank you.

Unfortunately, for my specific use-case, it does not resolve all issues:

  • relies on populating the message via toImageMsg, which does a memcpy on the image data – for me, this is the bottleneck
  • it is incompatible with the standard implementation of CameraPublisher or publisher<sensor_msgs::msg::Image> – standard tools like rqt will not be able to display the image

For these reasons, I still think I need to wait until the official CameraPublisher supports IPC.

@ZhenshengLee

@fhwedel-hoe thanks for your quick comment!

  • relies on populating the message via toImageMsg, which does a memcpy on the image data – for me, this is the bottleneck

I am sorry that not all copies are avoided in the whole pub and sub chain.

You are right about this open problem with the rclcpp zero-copy API, and I created MatthiasKillat/ros2_shm_vision_demo#12 to track it.

  • it is incompatible with the standard implementation of CameraPublisher or publisher<sensor_msgs::msg::Image> – standard tools like rqt will not be able to display the image

Well, you can use the shm_bridge node shm_image2m_bridge to convert the shm_msgs to sensor_msgs; then you can visualize the message in rviz.

# configure topic remapping
ros2 launch shm_msgs shm_image_bridge.launch.py

For these reasons, I still think I need to wait until the official CameraPublisher supports IPC.

Great, what is the current plan? Is there any documentation?

Thanks.

@ZhenshengLee

@fhwedel-hoe

  • relies on populating the message via toImageMsg, which does a memcpy on the image data – for me, this is the bottleneck

I'd like to show you, based on the doc from the Apex.AI team, that the memcpy during loaned-message construction is inevitable in the current ROS 2 API design. But performance still improves because the number of copies is minimized.

[image: figure from the Apex.AI zero-copy documentation]

Read this doc for more info: https://github.com/ZhenshengLee/ros2_shm_msgs/blob/master/doc/Using_Zero_Copy_In_ROS2.pdf

@ZhenshengLee

@fhwedel-hoe

I also found a relevant point in the doc of eCAL, another middleware for autonomous vehicles from Continental and Eclipse, which says:

Even though it is called zero-copy, only the subscribers are zero-copy. Publishers still have to copy the data into the memory file, as they have to also support other layers like UDP or TCP and therefore cannot directly work on the memory file.

I think this idea applies to all middlewares. So,

  1. in a zero-copy setup, someone must still perform the data transmission, which is a memcpy.
  2. due to compatibility considerations, there will be limitations when using zero-copy transport.

@fhwedel-hoe commented Jul 4, 2022

@ZhenshengLee
As far as I understand, with the standard implementation of publish, ROS 2 already supports efficient IPC in some circumstances (e.g. when nodes are composed into a single process and already share memory). I tried a tutorial and it was working with sensor_msgs::msg::Image, too. In ROS 2 Foxy, efficient IPC was enabled out of the box; I did not even need to make any adjustments. However, since the publisher takes ownership of the message, this works only once per message: in the publishing node, memory allocation and data assignment must happen explicitly for every message.
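
For reference, a minimal sketch of the publishing pattern I mean (node and topic names are made up; intra-process comms is enabled explicitly here):

#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>

// Minimal sketch: a node with intra-process comms enabled publishes a
// unique_ptr, handing ownership to rclcpp so no copy is needed as long as
// publisher and subscriber live in the same process.
class ImageSource : public rclcpp::Node
{
public:
  ImageSource()
  : Node("image_source", rclcpp::NodeOptions().use_intra_process_comms(true))
  {
    pub_ = create_publisher<sensor_msgs::msg::Image>("image", 10);
    timer_ = create_wall_timer(std::chrono::milliseconds(100), [this]() {
      // Allocation and data assignment happen explicitly, once per message.
      auto msg = std::make_unique<sensor_msgs::msg::Image>();
      msg->header.stamp = now();
      msg->width = 640;
      msg->height = 480;
      msg->encoding = "bgr8";
      msg->step = msg->width * 3;
      msg->data.resize(msg->step * msg->height);
      pub_->publish(std::move(msg));  // ownership given up; msg is now null
    });
  }

private:
  rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};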

Maybe I am misunderstanding the idea of "zero copy". Obviously, we cannot have zero-copy message passing for inter-machine communication, and I do not expect inter-process message passing to be zero-copy either. However, within one process, there should be a method of sharing data without creating any copy. Not even one. That is what I would call "zero-copy": the publisher keeps ownership, subscribers receive immutable "const" messages only, and received messages might be valid for a limited time only. Obviously, it would be a trade-off between speed and safety.
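
This is not exactly the ownership model I describe (in rclcpp the publisher hands ownership over rather than keeping it), but the closest existing pattern I know of is a subscription callback that takes a ConstSharedPtr, so within one process the image buffer is shared read-only instead of copied. Again just a sketch, matching the publisher above:

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>

// Sketch of the matching subscriber: the callback receives an immutable
// shared_ptr, so within one process the image buffer can be shared, not copied.
class ImageSink : public rclcpp::Node
{
public:
  ImageSink()
  : Node("image_sink", rclcpp::NodeOptions().use_intra_process_comms(true))
  {
    sub_ = create_subscription<sensor_msgs::msg::Image>(
      "image", 10,
      [this](sensor_msgs::msg::Image::ConstSharedPtr msg) {
        RCLCPP_INFO(get_logger(), "received %ux%u image", msg->width, msg->height);
      });
  }

private:
  rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
};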

Maybe I misunderstood the goal of ros2_shm_vision_demo. Does it focus on inter-machine, inter-process or intra-process communication?

@ZhenshengLee

@fhwedel-hoe

First of all, when I say IPC, I mean inter-process comm (socket, shm (minimum copy), zero-copy); intra-process comm is not included.

ros2 already does support efficient IPC in some circumstances (e.g. when processes are launched in a composite node and already share memory)

Intra-process zero-copy happens with composed nodes AND rclcpp intra-process comm enabled; see this example.
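
Roughly like this (a sketch; the package and class names are made up, and the CMake registration via rclcpp_components_register_nodes is omitted): the node is built as a component, enables intra-process comm, and gets loaded into the same container as its subscriber.

#include <rclcpp/rclcpp.hpp>
#include <rclcpp_components/register_node_macro.hpp>

// Sketch of a composable node with intra-process comm enabled.
// "my_camera" / CameraNode are placeholder names.
namespace my_camera
{

class CameraNode : public rclcpp::Node
{
public:
  explicit CameraNode(const rclcpp::NodeOptions & options)
  : Node("camera_node", rclcpp::NodeOptions(options).use_intra_process_comms(true))
  {
    // create publishers / timers here as usual
  }
};

}  // namespace my_camera

// Makes the class loadable into a component container at runtime.
RCLCPP_COMPONENTS_REGISTER_NODE(my_camera::CameraNode)

Then load the talker and listener into one container, e.g. ros2 run rclcpp_components component_container, followed by ros2 component load /ComponentManager my_camera my_camera::CameraNode (and the same for the subscriber component), so that intra-process comm can kick in.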

In ROS 2 foxy, efficient IPC was enabled out of the box.

Yes, if you mean the shm transport layer provided by Fast DDS: that really depends on the middleware layer, and in the rclcpp layer there will still be copies, so this is better called minimum copy. Performance can be better than socket transport because many copies are already avoided.

However, within one process, there should be a method of sharing data without creating any copy. Not even one. That is what I would call "zero-copy". The publisher needs to maintain ownership. Subscribers can receive immutable "const" messages only. Received messages might be valid for a limited time only. Obviously, it would be a trade-off between speed and safety.

Oh, this is what you really need: an intra-process shared pointer.

Maybe I misunderstood the goal of ros2_shm_vision_demo. Does it focus on inter-machine, inter-process or intra-process communication?

The repos ros2_shm_msgs and ros2_shm_vision_demo focus on inter-process communication with the rclcpp loaned API, as described in the ROS 2 design article on zero copy via loaned messages.

@fhwedel-hoe commented Jul 4, 2022

when I say IPC, I mean inter-process comm (socket, shm (minimum copy), zero-copy); intra-process comm is not included.
(…)
the repos ros2_shm_msgs and ros2_shm_vision_demo focus on inter-process communication with rclcpp

Now I understand. Thank you for the clarification. :) The English readme at ros2_shm_msgs says:

This package provides many ros2 message definitions that have a good support to true zero copy transportation in IPC (in a single machine) context.

The description "true zero copy" and "in a single machine" got me confused. I am sorry.

Oh, this is what you really need, an intra-process shared pointer.

Indeed. If there was an option to loan a message once and then re-use the allocated memory multiple times, that would be ideal. Sometimes I resort to not using ROS for parts of the application. I hope some day this limitation will be overcome. I now understand that ros2_shm_msgs is not what I was looking for. For inter-process communication, ros2_shm_msgs is looking good. :)

@ZhenshengLee

Oh, this is what you really need, an intra-process shared pointer.

Indeed. If there was an option to loan a message once and then re-use the allocated memory multiple times, that would be ideal.

That feature is not implemented in this repo, but you can refer to the ros2_v4l2_camera repo for intra-process comm, especially commit ca908c35cdf64680ce7e36e1d8b2eeff4710fd7e and the commits around it.

The description "true zero copy" and "in a single machine" got me confused. I am sorry.

I will refactor the readme in the future.

Now I can see this issue is different from #216.

Maybe we can change the issue title from "IPC support" to "Intra-process-comm support"?

@clydemcqueen changed the title from "[ROS2] IPC support in CameraPublisher" to "[ROS2] Intra-process-comm support in CameraPublisher" on Dec 3, 2022
@ahcorde (Collaborator) commented May 31, 2024

This is already supported thanks to PR #306.

Feel free to reopen this issue or create a new one if there are any remaining problems.

@ahcorde closed this as completed May 31, 2024