erts: kill spawned child processes on VM exit (unix only) #9453
base: master
Conversation
CT Test Results: 3 files, 141 suites, 50m 22s ⏱️. Results for commit 6f9f611. ♻️ This comment has been updated with latest results. To speed up review, make sure that you have read Contributing to Erlang/OTP and that all checks pass. See the TESTING and DEVELOPMENT HowTo guides for details about how to run tests locally. (Erlang/OTP Github Action Bot)
Force-pushed from 9f87bc1 to dba896f.
Hello! I think that we can move forward with this. There is no need to have an option to disable it for now (unless our existing tests show that it is needed...), but there need to be test cases verifying that it works as expected on both Unix and Windows. I do wonder, however, if we should send some other signal than `SIGKILL`.
Good point, TIL that SIGKILL is untrappable. Looking at ..., I experimented a bit locally to see if ...
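For readers following the signal discussion: a child can install a handler for `SIGTERM` and clean up before exiting, but `SIGKILL` can neither be caught nor ignored. This is a standalone illustration, not code from the PR.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_term = 0;

/* SIGTERM can be caught, giving the child a chance to clean up. */
static void on_term(int sig) { (void)sig; got_term = 1; }

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_term;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGTERM, &sa, NULL);          /* allowed */
    if (sigaction(SIGKILL, &sa, NULL) != 0) /* always fails: SIGKILL is untrappable */
        perror("SIGKILL cannot be trapped");
    while (!got_term)
        pause();                            /* wait for a signal */
    /* ...graceful cleanup here (flush output, remove temp files, etc.)... */
    return 0;
}
```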
Force-pushed from 8efd4f5 to 81abb88.
Now includes a test for normal erl shutdown (...). There are a few other flapping tests; I don't think this is related to my patch but can't say for sure... Running the tests on Windows is not going well for me, and it seems there's no CI for that yet? In theory my patch and the test will also run on win32, but I'd like to see that happen.
Force-pushed from 81abb88 to 80f301a.
If the grandchild is an Erlang node, it could communicate via Erlang distribution? Otherwise a file seems reasonable.
No, there is no GitHub CI for that yet. I have a branch that I work on from time to time to try to bring it in, but the tests are not stable enough yet. Maybe you can temporarily use it as a base for your changes; you should at least be able to see whether your tests pass or fail?
A couple of other things that popped into my mind: What do we do when someone does ...?
I know that there are users that rely on being able to spawn daemon processes through ...
Great! I'm working on that now, and learned that my proposed feature needs to be reimplemented separately for the win32 spawn driver.
That makes sense; the same principle applies IMHO, and it feels consistent to attempt a direct termination any time Erlang will lose its connection to the child. I'll add this.
I see... Interestingly, the "&" in that test is only relevant for allowing os:cmd to return immediately; the shell job control seems to be unimportant. In other words, the test is equivalent to calling open_port and not waiting for the process to finish, so this syntax is more a convenience than a special use case. But it would definitely indicate an intention to start a daemon with no direct link to Erlang, +1 that we should respect this usage!

As a tourist to the BEAM, all I can do is describe the options; I don't have instincts for which is the best way to go. We could preserve this "&" usage by only killing the immediate child process, which would be the shell. This still offers some benefits, since the application developer may be able to call open_port with spawn_executable, making their process the immediate child and causing it to be cleaned up without needing a wrapper script. It also simplifies reasoning about the "process group": killing exactly one child process is much more predictable.
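For concreteness, a sketch of the two options being weighed here (not the PR's code): killing only the direct child versus killing its whole process group. The group variant assumes the child was made a group leader at spawn time, which is not something open_port guarantees today.

```c
#include <signal.h>
#include <sys/types.h>

/* Option 1: terminate only the immediate child (the shell, if one was used). */
static void kill_direct_child(pid_t child_pid) {
    kill(child_pid, SIGTERM);
}

/* Option 2: terminate the child's whole process group, which also reaches
 * grandchildren -- but only if the child was made a group leader at spawn
 * time, e.g. with setpgid(0, 0) between fork() and exec(). A negative pid
 * addresses the group rather than a single process. */
static void kill_child_group(pid_t child_pgid) {
    kill(-child_pgid, SIGTERM);
}
```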
To some extent, I think there's a bigger problem here where we could have a whole new API for managing external processes; the current port API is neither powerful nor ergonomic. As @garazdawi mentioned, today it's not even possible to easily kill the process once you spawn it, and ...
Force-pushed from 80f301a to a9b6e1c.
I found that shell "&" assigns the background job to a new process group, which IMHO means that killing children by process group is back on the table. For now, however, my patch is rewritten to kill only the direct child process.

The latest branch also kills a port's child during port_close. Splitting this responsibility between the main VM process and the forker is causing a memory leak (and a leaky abstraction), and I'm imagining this can be resolved by sending another protocol message to the forker to allow it to perform cleanup such as killing the process, then freeing the memory used to track the child. Introducing this new message has some small overhead, but I don't see any obvious, existing means for the forker to detect that the port was closed by beam.
+1 that direct OS process management could be a nice addition to the core libraries, but the current iteration can be done without larger changes to the opaque port concept.
Yes, this seems like a good approach.
Force-pushed from f7eee72 to f7c336f.
Although I think this is a good idea, when I made the change it turned out to be too aggressive and caused a lot of existing tests to fail (example job output). Looking at a simplified outline for one test:

```erlang
Port = open_port({spawn, "port_test -h0 -o outfile"}, [out]),
Port ! {self(), {command, "echodata123"}},
% race here?
Port ! {self(), close},
receive {Port, closed} -> ok end.
test_server:sleep(500)
{ok, Written} = file:read_file("outfile"),
```

The test finds that outfile was never written, and the last match fails with ... I'm open to suggestions about how to proceed! One could argue that the tests have always been risky, and relied on an undocumented behavior of the port: to detect stdin closure, complete its process politely before closing, and return ... A safer version of the test would wait for an acknowledgement from the port program before closing:

```erlang
Port ! {self(), {command, "echodata123"}},
receive {Port, {data, "done."}} -> ok end.
Port ! {self(), close},
receive {Port, closed} -> ok end.
```

But it feels like application developers have probably been making the same assumption, and they have felt safe sending the ...
Good catch, I did not think of that. If we want that behaviour, we can later add a ...
These tests demonstrate that the VM allows a spawned child OS process to outlive it. Current behavior relies on the child to "politely" watch stdin and exit when the file descriptor is closed. Some but not all external processes follow this rule. The next few patches will change VM behavior to kill all child processes on exit, which should fix the tests introduced here.
This new message only has a minor effect in this patch: the forker stops tracking the child process on port_close, so if the child process is still running but stops later while the VM is alive, the forker will no longer send an exit status message to the closed port. The motivation for this patch is mostly just to set up the communication mechanism, to attach more interesting behavior later such as optionally killing the child process.
Previously, the forker start command would use the presence of port_id to indicate whether the caller had specified `exit_status` to open_port. This no-op patch splits this out into an explicit second field `want_exit_status`, so that we can always track port_id. The motivation is to allow the os_pid->port_id mapping to be used regardless of the exit_status setting. This patch shouldn't cause any behavior changes.
If the VM goes down, the forker will now respond by killing all spawned children.
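To illustrate the shape of that cleanup: when the forker notices the VM is gone, it can sweep every child it still tracks. This is a hedged sketch only; the real forker stores children in `forker_hash` and its iteration API may differ, so `tracked_children` and `n_children` here are hypothetical stand-ins.

```c
#include <signal.h>
#include <sys/types.h>

/* Hypothetical tracking table standing in for forker_hash. */
extern pid_t tracked_children[];
extern int   n_children;

/* When the connection to beam is lost (VM went down), sweep all live children. */
static void kill_all_children(void) {
    for (int i = 0; i < n_children; i++) {
        if (tracked_children[i] > 0)
            kill(tracked_children[i], SIGTERM);
    }
}
```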
Ready to review. This branch is supposed to: ...
In follow-up work I might try to implement this for win32, and will play with ...
Dumping my final thoughts, inline… This was fun!
```c
/*
 * Test utility which waits for SIGTERM and responds by deleting a temp file
```
The file path is used because it's easy to verify externally, even when the pipes that would pass messages out of the process may have been rudely closed by a crash.
```c
#include <errno.h>
#include <fcntl.h>

#ifndef __WIN32__
```
Doesn't run on win32 yet (tests are not executed), but the headers at least allow for successful compilation.
```c
memset(proto, 0, sizeof(ErtsSysForkerProto));
proto->action = ErtsSysForkerProtoAction_Stop;
proto->u.stop.os_pid = dd->pid;
erl_drv_port_control(forker_port, ERTS_FORKER_DRV_CONTROL_MAGIC_NUMBER,
```
We don't need to be picky about whether this message is successfully delivered. I think the main cases not covered by the errno check below have to do with the forker going down during VM halt, and this patch specifically adds logic to the forker to kill processes and clean up even when Erlang cannot send the final messages.

It wouldn't be helpful to also crash Erlang faster in these cases; better to let the general flushing happen if it was requested.
```c
#endif
    } else if (proto->action == ErtsSysForkerProtoAction_Stop) {
        if ((res = write(forker_fd, (char*)proto, sizeof(*proto))) < 0) {
            if (errno != EAGAIN && errno != EINTR) {
```
These are a bit of a random guess. I saw EAGAIN come up in some test cases and found other guard conditions allowing these errors to be non-fatal. That vaguely makes sense, because they have to do with the `write` syscall failing on an otherwise operational pipe.

It might be worthwhile to retry the write later? But the precedent I see elsewhere in the code seems more practical: this message is a nice-to-have at the moment.
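For comparison, here is what a retrying write might look like; this is a sketch of the alternative being dismissed, not what the patch does, and `write_retrying` is a hypothetical helper name.

```c
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Sketch of the alternative: retry the protocol write instead of treating it
 * as best-effort. Retries immediately on EINTR; waits for writability on
 * EAGAIN rather than spinning. Assumes len <= PIPE_BUF so the pipe write is
 * all-or-nothing and partial writes need not be handled. */
static int write_retrying(int fd, const void *buf, size_t len) {
    for (;;) {
        ssize_t res = write(fd, buf, len);
        if (res >= 0)
            return 0;
        if (errno == EAGAIN) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            (void)poll(&pfd, 1, -1);   /* wait until the pipe drains */
        } else if (errno != EINTR) {
            return -1;                 /* genuinely fatal */
        }
    }
}
```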
```diff
@@ -591,10 +590,17 @@ main(int argc, char *argv[])
     errno = 0;

     os_pid = fork();
-    if (os_pid == 0)
+    if (os_pid == 0) {
```
Braces are added only to avoid confusion with the scoping block below.
```c
       from the uds_fd. */
    DEBUG_PRINT("Failed to write to uds: %d (%d)", uds_fd, errno);
    est.os_pid = (pid_t)ibuff[0];
    es = hash_remove(forker_hash, &est);
```
No change here: `get_port_id` already had the side effect of popping from the hash.
```c
res = read_all(sigchld_pipe[0], (char *)ibuff, sizeof(ibuff));
if (res <= 0) {
    ABORT("Failed to read from sigchld pipe: %d (%d)", res, errno);
}

proto.u.sigchld.port_id = get_port_id((pid_t)(ibuff[0]));

if (proto.u.sigchld.port_id == THE_NON_VALUE)
```
`port_id` is always present, now. `es` can still be missing from the hash, but only in edge cases like double-close or other bad situations. Might even be worthwhile to emit a debug line if the child isn't found in `forker_hash`.
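For illustration, that debug line might look something like the following in the forker's sigchld path; this is a sketch against the names visible in the snippet above, not code from the PR.

```c
es = hash_remove(forker_hash, &est);
if (es == NULL) {
    /* Double-close or an otherwise untracked child: note it and move on. */
    DEBUG_PRINT("sigchld for untracked child %d", (int)est.os_pid);
}
```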
```diff
@@ -662,6 +665,21 @@ main(int argc, char *argv[])
     return 1;
```
Should be unreachable.
Actually, I'm uncertain about some of the `ABORT` exits: my thinking is that these indicate pathological corner cases where we can no longer trust the internal tracking nor the child pipes, so it would be useless or even risky to try "atexit"-like cleanup behaviors. This is supported by the documentation for glibc `abort`:

> Up until glibc 2.26, if the abort() function caused process termination, all open streams were closed and flushed (as with fclose(3)). However, in some cases this could result in deadlocks and data corruption. Therefore, starting with glibc 2.27, abort() terminates the process without flushing streams. POSIX.1 permits either possible behavior, saying that abort() "may include an attempt to effect fclose() on all open streams".
That's how I feel, too.
```diff
@@ -662,6 +665,21 @@ main(int argc, char *argv[])
     return 1;
 }

+static void kill_child(pid_t os_pid) {
+    if (os_pid > 0 && kill(os_pid, SIGTERM) != 0) {
+        DEBUG_PRINT("error killing process %d: %d", os_pid, errno);
```
Should always work, but if it doesn't, e.g. because of a race between the child dying on its own and us killing it, continue trying to kill the other children.
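Concretely, that benign race shows up as `ESRCH` (no such process). A sketch of how a caller might distinguish it, not part of the patch, with `kill_child_checked` as a hypothetical helper:

```c
#include <errno.h>
#include <signal.h>
#include <sys/types.h>

/* Returns 0 if the child is gone (killed now, or already dead), -1 otherwise.
 * ESRCH just means we lost the race with the child's own exit. */
static int kill_child_checked(pid_t os_pid) {
    if (os_pid <= 0)
        return -1;
    if (kill(os_pid, SIGTERM) == 0 || errno == ESRCH)
        return 0;
    return -1;   /* e.g. EPERM: log it and continue with the other children */
}
```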
```diff
@@ -7475,6 +7475,11 @@ reported to the owning process using signals of the form
 The maximum number of ports that can be open at the same time can be configured
 by passing command-line flag [`+Q`](erl_cmd.md#max_ports) to [erl](erl_cmd.md).

+When the VM shuts down, spawned executables are sent `SIGTERM` on unix. The
```
"unix" -> "POSIX"? Hopefully this can be reimplemented for win32 in later work, anyway. Is it okay to have divergent behavior on the platforms?
I haven't read this PR in detail, but wanted to suggest sending `SIGHUP` ...
I could imagine this to be true, and HUP has the historical advantage of having nearly the same semantics as the previous assumption: that polite apps quit when their stdin is closed. Hanging up is already nearly identical to closing pipes; see ... On the other hand, many existing daemons trap HUP to gracefully reload. And apps robust against stdin closure might also ignore HUP. If I had to personify the two signals, my feeling is that HUP would be a vague "goodbye!" while TERM is an explicit but amiable "please terminate yourself now." How to make such a decision! If you were on a desert island with only one unix signal…
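To make the trade-off concrete: many daemons install handlers like the ones below, where SIGHUP means "reload" and SIGTERM means "exit"; sending HUP at VM shutdown would leave such a process running. A generic sketch, not tied to any particular daemon.

```c
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t reload_requested = 0;
static volatile sig_atomic_t stop_requested = 0;

static void on_hup(int sig)  { (void)sig; reload_requested = 1; } /* reread config */
static void on_term(int sig) { (void)sig; stop_requested = 1; }   /* shut down */

int main(void) {
    signal(SIGHUP, on_hup);
    signal(SIGTERM, on_term);
    while (!stop_requested) {
        if (reload_requested) {
            reload_requested = 0;
            /* ...reload configuration instead of exiting... */
        }
        pause();   /* wait for the next signal */
    }
    /* graceful shutdown */
    return 0;
}
```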
```c
} else if (proto.action == ErtsSysForkerProtoAction_Stop) {
    ErtsSysExitStatus est, *es;
    est.os_pid = proto.u.stop.os_pid;
    es = hash_remove(forker_hash, &est);
```
Why do we remove it from the hash here? Do we not want closed but alive processes to be killed when the VM shuts down?
We no longer want to receive an exit message from the spawned process, and I was getting conservative about memory, but I now see that we can keep both behaviors by changing the hash_remove to:

```
es.want_exit_status = false
```

(Also, my concern about memory is nonsense, since the handful of bytes in `es` is tracking an entire, in-memory child process.)
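A hedged sketch of that alternative, keeping the entry in the hash so the halt-time sweep still sees the child while suppressing the exit-status report. It assumes a lookup call such as `hash_get` exists alongside `hash_remove`; the exact erts hash API and field layout may differ.

```c
} else if (proto.action == ErtsSysForkerProtoAction_Stop) {
    ErtsSysExitStatus est, *es;
    est.os_pid = proto.u.stop.os_pid;
    es = hash_get(forker_hash, &est);   /* hypothetical lookup instead of removal */
    if (es != NULL)
        es->want_exit_status = 0;       /* port is closed: stay tracked for the
                                           halt-time kill, but report nothing back */
```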
Thanks for making this. I think the code looks fine except for one comment that I added. As this PR is a potentially breaking change, it will be part of the next major release, that is, Erlang/OTP 29.
Before merging this, we need to at least investigate whether it is possible to do something similar on Windows or not. We want the behaviour of the different OSs to be as similar as possible; however, this particular area is already full of platform-specific stuff, so it is not super important.
* Child erl must be run with a shell if it should die on EOF.
* Remove unexpected exit status matches. These will be caught by the timeout anyway, and extra cases were muddying the intention.
* Use ct:sleep when possible. Still relies on manual sleeping and temporary files, which should be replaced by distribution in later clean-up.
port_close has its normal effects and no exit message will be received even if exit_status was requested on the initial open_port, but now these children will still be killed when the VM halts. Includes a test for this case.
For sure. I've installed the win32 development environment and can compile a release, but I'm still a bit stuck on running tests. It would be possible to implement the feature using manual smoke testing on the command line in a pinch. I've also been waiting for the unix behavior to solidify so that I'm not working in Windows any more than necessary; I feel comfortable going ahead with that work now.
My last MS programming was a DOS graphics library, so please take my findings with a grain of salt. Here's what I learned from RTFM:
The nice way to shut down a process is to use ... Then we get into murkier territory that I don't quite understand: I think that a process can open one or more windows. I don't know whether the main process itself counts as a window even if it's "hidden", but I think it does. A process is made up of one or more threads. There are a variety of library methods to send a message to a window or a thread, and in start_erl we have a precedent for calling ...

We can start by tracking spawned processes and sending WM_CLOSE from the VM during halt, and then as follow-up work the whole arrangement could be improved by perhaps creating a forker process like erl_child_setup on unix, and letting it do the cleanup if the VM crashes. Maybe the forker can be pulled up a level so that most of its logic is shared between win32 and unix.
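To ground the WM_CLOSE idea, a sketch of the classic enumeration pattern: walk the top-level windows, match them to the target process id, and post WM_CLOSE to each. This is a generic Win32 technique, not code from this branch, and it only reaches processes that actually run a message loop; `request_close` is a hypothetical helper name.

```c
#ifdef __WIN32__
#include <windows.h>

/* Post WM_CLOSE to every top-level window owned by the given process id.
 * Console-only processes without a message loop will not react and would
 * need a different mechanism. */
static BOOL CALLBACK close_if_owned(HWND hwnd, LPARAM lparam)
{
    DWORD target_pid = (DWORD)lparam;
    DWORD owner_pid = 0;
    GetWindowThreadProcessId(hwnd, &owner_pid);
    if (owner_pid == target_pid)
        PostMessage(hwnd, WM_CLOSE, 0, 0);
    return TRUE;               /* keep enumerating */
}

static void request_close(DWORD target_pid)
{
    EnumWindows(close_if_owned, (LPARAM)target_pid);
}
#endif
```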
This is a very rough proof-of-concept for discussion, which ensures all children spawned with open_port are terminated along with the BEAM.
Will be discussed in https://erlangforums.com/t/open-port-and-zombie-processes