[Question] About the usage of shared memory in docker #213
I've changed the Cyclone config a bit to enable tracing. When I start the script, it reports its iceoryx address:
Subscriber:
When I interrupt them, iceoryx reports the following:
Does it mean that everything works as expected? If yes, why don't I see my processes in `ipcs`?
From a look at your code I think it should be happy to use Iceoryx. I should try it, but ... In Unix/Linux there are multiple ways of creating shared memory, and System V shared memory (which is what `ipcs` reports) is only one of them.
Maybe some other trick exists (a lot has happened in recent years and I've not tried to follow it all), but these are the classic ones. I'm pretty sure Iceoryx uses the third. You can see it if you look at the VM maps, either via `/proc/<pid>/maps` or with `pmap`. To find out if it really uses Iceoryx for this data, one option is to dig deeper into the Cyclone traces: the discovery output will tell you which addresses it uses, and from that you can tell whether it uses Iceoryx. Or you can use the Iceoryx introspection tool, which will show you what's going on there. Finally, in a case like this where it is basically everything-via-iceoryx or everything-over-the-network, looking at network statistics will also work.
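To make the distinction concrete, here is a quick sketch of the checks implied above (the `PID` value is a placeholder, not one from this thread): System V segments show up in `ipcs -m`, while POSIX shared-memory objects live under `/dev/shm` and appear in a process's memory map.

```shell
# System V shared memory: this is all that `ipcs -m` can show.
ipcs -m

# POSIX shared memory: objects appear as files under /dev/shm
# (iceoryx creates e.g. /dev/shm/iceoryx_mgmt there).
ls -l /dev/shm

# Check whether a given process has mapped any of them:
PID=$$   # placeholder: replace with your publisher/subscriber PID
grep /dev/shm "/proc/$PID/maps" || echo "no /dev/shm mappings for $PID"
```

If the grep prints mapping lines for your publisher and subscriber, they are attached to POSIX shared memory even though `ipcs` shows nothing.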
Thank you for such a detailed reply! Suppose the PID of one of my processes is I've also checked `top -H`; for the recvUC thread it shows a TIME+ of about 1:49, whereas TIME+ for the python process is about 11:36. Is that the ratio you are writing about? Finally, in the However, the thing that concerns me the most is that when I stop iceoryx and rerun both scripts without
bump this
Sorry that you had to bump it ... 1:49 for recvUC versus 11:36 for the Python process:
That's suspicious. A lot of the DDS work happens in the background, so if the Python application is simply taking a lot of time to process the data, it is possible that its execution time is almost independent of the work done by DDS, regardless of whether it uses Iceoryx or the loopback interface. I don't know exactly what Linux outputs with
hi @morkovka1337! If you start RouDi with `iox-roudi -l debug`, the output will contain information about app registration, like this:

```
Reserving 22740232 bytes in the shared memory [iceoryx_mgmt]
[ Reserving shared memory successful ]
2023-10-16 14:47:03.299 [ Debug ]: Registered memory segment 0x7f6164ed9000 with size 22740232 to id 1
Reserving 149264720 bytes in the shared memory [root]
[ Reserving shared memory successful ]
2023-10-16 14:47:03.350 [ Debug ]: Roudi registered payload data segment 0x7f615c07f000 with size 149264720 to id 2
```

RouDi is ready for clients, and after you start your scripts you will see registrations like:

```
2023-10-16 14:48:38.859 [ Debug ]: Registered new application iceoryx_rt_140_1697467718858636401
2023-10-16 14:48:38.860 [ Debug ]: Created new ConditionVariable for application iceoryx_rt_140_1697467718858636401
2023-10-16 14:48:39.619 [ Debug ]: Registered new application iceoryx_rt_139_1697467719618867611
2023-10-16 14:48:39.620 [ Debug ]: Created new ConditionVariable for application iceoryx_rt_139_1697467719618867611
```

So now we know the applications are registered. Let's check `lsof -r1 /dev/shm/ | grep python3`; as you can see, both Python processes have the iceoryx segments mapped:

```
python3   73782 root mem REG 0,25 149264720 3291 /dev/shm/root
python3   73782 root mem REG 0,25  22740232 3290 /dev/shm/iceoryx_mgmt
python3   73782 root  6u REG 0,25  22740232 3290 /dev/shm/iceoryx_mgmt
python3   73782 root  7u REG 0,25 149264720 3291 /dev/shm/root
python3   73811 root mem REG 0,25 149264720 3291 /dev/shm/root
python3   73811 root mem REG 0,25  22740232 3290 /dev/shm/iceoryx_mgmt
python3   73811 root  6u REG 0,25  22740232 3290 /dev/shm/iceoryx_mgmt
python3   73811 root  7u REG 0,25 149264720 3291 /dev/shm/root
```

Also let's check the next command, `lsof -r1 /dev/shm/ | grep iox-roudi`; as you can see, the same reserved segments are used by iox-roudi:

```
iox-roudi 73617 root mem REG 0,25 149264720 3291 /dev/shm/root
iox-roudi 73617 root mem REG 0,25  22740232 3290 /dev/shm/iceoryx_mgmt
iox-roudi 73617 root  4u REG 0,25  22740232 3290 /dev/shm/iceoryx_mgmt
iox-roudi 73617 root  5u REG 0,25 149264720 3291 /dev/shm/root
```

But wait, let's check which processes are associated with the PIDs that use the shared memory:

```
~$ pmap -x 73811
73811: python3 213-docker/s_sub.py
~$ pmap -x 73782
73782: python3 213-docker/s_pub.py
```

From here we can already determine that the publisher and subscriber are the processes using the shared memory. Finally, compare `top -H` with and without shared memory:

```
~$ top -H | grep python3   # with shm
73782 root 20 0 1841604  96988 39840 R 99.9 0.3 2:40.20 python3
73811 root 20 0 1832428 103272 39360 R 94.4 0.3 2:39.77 python3
73811 root 20 0 1832428 103524 39360 R 99.9 0.3 2:42.80 python3
73782 root 20 0 1837600  91696 39840 R 99.3 0.3 2:43.21 python3
~$ top -H | grep python3   # without shm
74350 root 20 0 1591340  96548 39680 R 99.9 0.3 0:13.32 python3
74377 root 20 0 1582120 102960 39360 R 94.4 0.3 0:10.94 python3
74377 root 20 0 1582120 101448 39360 R 99.9 0.3 0:13.97 python3
74350 root 20 0 1587292  92356 39680 R 99.7 0.3 0:16.33 python3
```
Thank you for such a detailed analysis of my situation! I had previously checked the output of RouDi and saw the lines above about registering the memory segments. I will also do the remaining steps and report my results here.
@Splinter1984 I've checked all the proposed steps and can fully reproduce your output. BTW, regarding the situation that the send/receive loop time is the same with and without RouDi: I've tried to run the containers without setting the CYCLONEDDS_URI variable. Surprisingly, the result became an order of magnitude faster (loop time ~0.003 s vs ~0.05 s when using the CYCLONEDDS_URI variable).
I've realised that previously I was checking the wrong loop time. What I was actually checking was serialization time over the network and shared memory. Instead, I've modified my
So I run it with
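For reference, one way to measure end-to-end loop time rather than just serialization time is to timestamp around the full publish/receive round trip. A minimal sketch, with hypothetical `publish`/`receive` callbacks standing in for the actual DDS write and blocking take calls:

```python
import time

def mean_round_trip(publish, receive, n=100):
    """Average duration of a full publish -> receive round trip.

    `publish` and `receive` are placeholders for the real DDS calls;
    timing the pair together captures transport latency rather than
    only the cost of serializing the sample.
    """
    start = time.perf_counter()
    for _ in range(n):
        publish()
        receive()
    return (time.perf_counter() - start) / n

# Example with no-op callbacks, just to show the call shape:
rtt = mean_round_trip(lambda: None, lambda: None, n=10)
print(f"mean round trip: {rtt:.6f} s")
```

Comparing this number with `CYCLONEDDS_URI` set and unset is what distinguishes the iceoryx path from the loopback path.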
hi @morkovka1337.
@Splinter1984 thank you, I will try it
Hi,
I'm trying to set up the environment and other things to be able to transfer messages using shared memory. First of all, here is my Image class:
Image class
Note that the image array has a fixed size. Then I implement the publisher and subscriber:
Publisher
Subscriber
I've checked the requirements on the QoS for both publisher and subscriber here (as far as I can see, they should be identical).
Next, I'm running iceoryx RouDi in one docker container with
`--net=host --ipc=host -v /dev:/dev -v /tmp:/tmp`.
I also set the environment variable `CYCLONEDDS_URI` to the config file. I'm using the default config file:
Cyclonedds uri config
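For reference, the shared-memory section of a CycloneDDS config file from that era looks roughly like this. This is a sketch based on the documented `SharedMemory` settings, not the exact file used here; available options depend on the CycloneDDS version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <SharedMemory>
      <!-- Route eligible topics through iceoryx instead of the network -->
      <Enable>true</Enable>
      <LogLevel>info</LogLevel>
    </SharedMemory>
  </Domain>
</CycloneDDS>
```

With `CYCLONEDDS_URI=file:///path/to/cyclonedds.xml` exported in both containers, Cyclone will try to attach to the RouDi-managed segments at startup.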
Finally, I run the above scripts in docker containers with the same net and ipc options. They work fine and the subscriber receives messages from the publisher. What I'm trying to understand:
`ipcs -m`
shows only firefox and vs code processes (I've checked the PIDs via `ps -p`). Should my subscriber and publisher be there if everything works as expected?