Disable intra-process, only use inter-process communication #523
Due to the specific application scenario, I only want to use inter-process communication and not intra-process communication. For example, within the same process there are a writer_1 and a reader_1, while in another process there is a reader_2; they are all on the same topic. When writer_1 sends a message, only reader_2 should receive it, while reader_1 should not. How can this be configured or changed?

Comments
Interesting idea. We did not think about something like this. Can you tell us something about the use case for such a feature? Currently, this is unfortunately not easily possible. A workaround could be to use the user-header feature to publish the PID and, on the subscriber side, drop all samples carrying the PID of the same process.
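A minimal sketch of that workaround, assuming a recent iceoryx2 release where custom user headers are plain `#[repr(C)]` types with the `ZeroCopySend` derive; the service name, the `PidHeader` type, and the payload are placeholders, not part of any project API:

```rust
use iceoryx2::prelude::*;

// Hypothetical header type carrying the sender's PID.
#[derive(Debug, Default, Clone, Copy, ZeroCopySend)]
#[repr(C)]
pub struct PidHeader {
    pub pid: u32,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;
    let service = node
        .service_builder(&"My/Topic".try_into()?)
        .publish_subscribe::<u64>()
        .user_header::<PidHeader>()
        .open_or_create()?;

    let publisher = service.publisher_builder().create()?;
    let subscriber = service.subscriber_builder().create()?;

    // publisher side: stamp every sample with the PID of this process
    let mut sample = publisher.loan_uninit()?;
    sample.user_header_mut().pid = std::process::id();
    sample.write_payload(42).send()?;

    // subscriber side: drop everything that originates from this process
    while let Some(sample) = subscriber.receive()? {
        if sample.user_header().pid == std::process::id() {
            continue; // sample was published by our own process
        }
        println!("received: {}", sample.payload());
    }
    Ok(())
}
```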
@smileghp Currently this does not work out of the box, but you are able to identify the origin of a sample and discard it. The publisher has a unique id that is stored in every sample's header. You can acquire it via

```rust
let publisher_id = publisher.publisher_id();
```

On the subscriber side you can acquire the publisher id in the same fashion and discard the sample when it comes from the local publisher:

```rust
while let Some(sample) = subscriber.receive()? {
    if publisher_id == sample.header().publisher_id() {
        continue; // do not handle the sample
    }
}
```
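Assembled into a runnable sketch for the case where publisher and subscriber live in the same process (service name and payload type are placeholders; error handling is simplified):

```rust
use iceoryx2::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;
    let service = node
        .service_builder(&"My/Topic".try_into()?)
        .publish_subscribe::<u64>()
        .open_or_create()?;

    // writer_1 and reader_1 live in this process ...
    let publisher = service.publisher_builder().create()?;
    let subscriber = service.subscriber_builder().create()?;

    // ... so remember the unique id of the local publisher
    let local_publisher_id = publisher.publisher_id();

    publisher.send_copy(42)?;

    while let Some(sample) = subscriber.receive()? {
        // discard samples that originate from the publisher in this process;
        // samples from publishers in other processes are handled normally
        if sample.header().publisher_id() == local_publisher_id {
            continue;
        }
        println!("received: {}", sample.payload());
    }
    Ok(())
}
```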
@elBoberido Performance optimization is also one of my job responsibilities. For intra-process communication, passing object pointers to data entities directly is sometimes more efficient, since it avoids unnecessary serialization. In my own project I operate this way, so I need to disable the intra-process functionality of iceoryx2.
@elfenpiff Thank you very much. Before the new version comes out I can try the method you mentioned, but it will cost some performance in data transmission.
Yes, in DDS it is implemented by identifying the recipient's PID during the transmit process, rather than filtering only at subscription time. I think this approach is more efficient.
This is exactly what is happening with iceoryx2 under the hood. Also, you do not need serialization when using zero-copy communication. We already provide containers that are shared-memory compatible; see the complex data types example that introduces them.

When a publisher delivers a message to a subscriber, it iterates over a vector and copies a pointer (8 bytes) into the receive buffer of the subscriber. I am a bit skeptical that a filter on the publisher side is more efficient than one on the subscriber side - maybe it is even slower. The reason is that the publisher always has to check the filter for all subscribers, whereas on the subscriber side you only need to activate it when you explicitly need it. And when you combine this with events and wake up the other process/thread just so that it can filter out the sample, it will cost you a lot of performance.
For Cyclone DDS, for instance, I know the details: it is far more efficient there since it then utilizes zero-copy behavior where just a pointer to the payload is shared (they are using classic iceoryx for this). But this is simply how zero-copy works, so such optimizations will maybe gain you nothing when you already use a zero-copy framework.

Zero-copy communication in iceoryx2

Here is a brief overview of how zero-copy in iceoryx2 works:

1. The publisher loans a chunk of shared memory and constructs the payload directly in it.
2. On send, only a pointer (8 bytes) to that chunk is copied into the receive buffer of every subscriber.
3. The subscriber reads the payload directly from shared memory.

Since both applications share the actual memory, you do not need serialization if your data types are shared-memory compatible.

Network communication

Here is a comparison to network communication - I marked the expensive steps with (high runtime cost). I ignore the serialization/deserialization steps here (which are additional huge bottlenecks) and just assume we want to transmit something like a plain array:

1. Copy the array from the user buffer into the kernel's socket buffer (high runtime cost).
2. Transmit the data; the receiving side copies it into its own kernel buffer (high runtime cost).
3. Copy the data from the kernel buffer into the receiver's user buffer (high runtime cost).

Every step copies the full payload, while zero-copy transfers only an 8-byte pointer regardless of the payload size.
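To make this concrete, here is a minimal sketch of the zero-copy publish path described above (service name and payload type are placeholders):

```rust
use iceoryx2::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;
    let service = node
        .service_builder(&"My/Topic".try_into()?)
        .publish_subscribe::<[u64; 8]>()
        .open_or_create()?;

    let publisher = service.publisher_builder().create()?;

    // loan a chunk directly from the shared-memory data segment
    let sample = publisher.loan_uninit()?;
    // construct the payload in place - nothing is serialized
    let sample = sample.write_payload([1, 2, 3, 4, 5, 6, 7, 8]);
    // delivery copies only an 8-byte pointer into each subscriber's receive buffer
    sample.send()?;
    Ok(())
}
```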
Summary
So my recommendation would be to look into the complex data types example, since I think the biggest performance gain can be made by getting rid of serialization. Additionally, check that you utilize the iceoryx2 API efficiently, e.g. by loaning samples and writing the payload in place instead of sending copies.
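As a sketch of the two API paths the benchmark below compares (the `Publisher` generic parameters are my assumption; note that `write_payload` still copies its argument once, so for large types you would compute the data directly into the loaned chunk):

```rust
use iceoryx2::port::publisher::Publisher;
use iceoryx2::prelude::*;

// 40960-byte payload, matching the benchmark below
type Payload = [u64; 5120];

fn publish_both_ways(
    publisher: &Publisher<ipc::Service, Payload, ()>,
    data: Payload,
) -> Result<(), Box<dyn std::error::Error>> {
    // convenient path: loans a chunk internally, then memcpys the whole
    // 40 KB payload into shared memory - the extra copy measured below
    publisher.send_copy(data)?;

    // explicit path: loan first, then initialize the chunk (write_payload
    // still copies `data` here; to avoid any copy, construct the payload
    // directly in the loaned sample instead of building it beforehand)
    let sample = publisher.loan_uninit()?;
    let sample = sample.write_payload(data);
    sample.send()?;
    Ok(())
}
```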
If you want to know how expensive even a single copy is, you can use our benchmark that comes with a --send-copy option.

With copy and a payload size of 40960 bytes:

```
cargo run --bin benchmark-publish-subscribe --release -- --bench-all -p 40960 --send-copy
# iceoryx2::service::ipc::Service ::: Iterations: 10000000, Time: 9.739591612, Latency: 486 ns, Sample Size: 40960
```

Without copy:

```
cargo run --bin benchmark-publish-subscribe --release -- --bench-all -p 40960
# iceoryx2::service::ipc::Service ::: Iterations: 10000000, Time: 2.7868404509999998, Latency: 139 ns, Sample Size: 40960
```

I measured both on my laptop. So an unnecessary copy of 40 KB increases the latency by a factor of roughly 3.5 (486 ns vs 139 ns).