RSDK-9591 - Kill all lingering module processes before exiting #4657
base: main
Conversation
@@ -280,6 +283,9 @@ func (s *robotServer) serveWeb(ctx context.Context, cfg *config.Config) (err err
		case <-doneServing:
While we're here, I'd recommend removing this whole `case <-doneServing` branch (and, incidentally, the `select` statement) and moving straight to the killing/logging. A sketch of what that might look like is below.
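For illustration, a minimal sketch of that simplification, assuming the surrounding code matches the hunk above (the `killable` interface and the log message are placeholders, not code from this PR):

```go
package shutdown

import "log"

// killable stands in for whatever interface myRobot satisfies;
// it is an assumption for this sketch.
type killable interface{ Kill() }

// forceShutdown shows the suggested path: no select, no doneServing
// case, just kill whatever is still running and log fatally.
func forceShutdown(myRobot killable) {
	if myRobot != nil {
		myRobot.Kill()
	}
	log.Fatal("timed out waiting for graceful shutdown; killed lingering module processes")
}
```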
Why is that? I thought the justification above for the code as it is makes sense.
@@ -280,6 +283,9 @@ func (s *robotServer) serveWeb(ctx context.Context, cfg *config.Config) (err err
		case <-doneServing:
			return true
		default:
			if myRobot != nil {
If `myRobot` can be nil here -- this is a data race.
Would that be an issue? I can see that `myRobot` could have started some processes but not yet returned, but I don't know if we can protect against that completely. One mitigation is sketched below.
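One hedged way to narrow that race, assuming the variable is assigned on the serve path (all names here are illustrative, not code from this PR): serialize access to the robot pointer behind a mutex so the write and the kill-path read cannot interleave.

```go
package shutdown

import "sync"

// killable stands in for the robot interface; an assumption for this sketch.
type killable interface{ Kill() }

// robotHolder serializes access to the robot pointer so the assignment
// in serveWeb and the read on the kill path cannot race.
type robotHolder struct {
	mu    sync.Mutex
	robot killable
}

func (h *robotHolder) set(r killable) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.robot = r
}

// kill snapshots the pointer under the lock, then kills outside the
// lock so a slow Kill cannot block other users of the mutex.
func (h *robotHolder) kill() {
	h.mu.Lock()
	r := h.robot
	h.mu.Unlock()
	if r != nil {
		r.Kill()
	}
}
```

This still cannot catch processes a half-constructed robot has spawned but not yet recorded, which matches the caveat above.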
// Kill will attempt to kill any processes on the system started by the robot as quickly as possible.
// This operation is not clean and will not wait for completion.
func (r *localRobot) Kill() {
	if r.manager != nil {
Can we justify that this isn't a data race?
`r.manager` could be nil if startup fails or hangs, but yes, it could also be a data race. One possible shape for making Kill idempotent and race-tolerant is sketched below.
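A sketch of that shape, assuming `localRobot` can carry a mutex and a `sync.Once` (only the `manager` field comes from the diff above; the rest is illustrative):

```go
package robotimpl

import "sync"

// resourceManager is a stand-in for the real manager type.
type resourceManager struct{}

// Kill kills module processes without waiting for completion.
func (m *resourceManager) Kill() {}

type localRobot struct {
	mu       sync.Mutex
	killOnce sync.Once
	manager  *resourceManager
}

// Kill attempts to kill any processes started by the robot as quickly
// as possible, without waiting for completion. sync.Once makes repeated
// calls idempotent, and the mutex makes the manager read safe against a
// concurrent Close.
func (r *localRobot) Kill() {
	r.killOnce.Do(func() {
		r.mu.Lock()
		mgr := r.manager // may still be nil if startup failed or hung
		r.mu.Unlock()
		if mgr != nil {
			mgr.Kill()
		}
	})
}
```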
This is part two of two PRs that will hopefully help with shutting down all module processes before viam-server exits. Part one is here.
This is still a draft as I'm looking for thoughts and ideas around making this better.
Before doing this, I looked into assigning module processes to the same process group as the viam-server and just killing the process group. However, we already assign each module and process to a unique process group, and we rely on that property to kill each module and process separately when necessary. Changing that behavior would be risky, so I did not pursue that path further.
We could kill each process in the module manager directly using the exposed unixpid, but I figured we could just do it within each managed process; that way we get support on Windows as well. It does mean I added Kill() to a few interfaces (see the sketch below), but that will hopefully be extensible in case anything else needs killing.
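Roughly the shape of that interface change, hedged heavily: the real interfaces have more methods, and the actual Kill signature in this PR may differ.

```go
package pexec

// ManagedProcess sketches the addition described above: a Kill that
// does not wait for completion, so the same code path can work on
// Windows as well as Unix.
type ManagedProcess interface {
	Stop() error
	// Kill sends a hard kill to the underlying OS process and
	// returns immediately; it must be safe to call alongside Stop.
	Kill()
}
```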
The idea is for a Kill() call to propagate from the viam-server at the end of the 90s shutdown window, blocking on as little as possible. Kill() does not care about the resource graph, only that we kill the processes and module processes spawned by the server. I did not do the killing in parallel, since the calls should not block. I can see this racing with Close(), but I think the mitigation is to make kill/close idempotent so they do not panic when they overlap. This Kill() call currently happens in the same goroutine that eventually calls log.Fatal; is that good enough for now, or should we create a separate goroutine so that we can guarantee the viam-server exits by the 90s mark (sketched below)?
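A sketch of the separate-goroutine variant, in case we want the hard guarantee (the 90s deadline comes from the description above; the two-second grace period and all names are illustrative assumptions):

```go
package shutdown

import (
	"log"
	"os"
	"time"
)

// killable stands in for the robot interface; an assumption for this sketch.
type killable interface{ Kill() }

// armWatchdog makes the process exit shortly after the deadline even if
// Kill itself blocks, by running Kill off the exiting goroutine.
func armWatchdog(myRobot killable, deadline time.Duration) {
	go func() {
		time.Sleep(deadline)
		done := make(chan struct{})
		go func() {
			if myRobot != nil {
				myRobot.Kill()
			}
			close(done)
		}()
		select {
		case <-done:
		case <-time.After(2 * time.Second): // grace period; assumption
		}
		log.Println("forced shutdown after timeout")
		os.Exit(1)
	}()
}
```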
Ideas for testing? I've tested with a Python module and observed that the module process does get killed; it would be good to test on setups where this problem is actually happening.