Some Reflections on Writing Unix Daemons
> Rather than having processes detach themselves from the terminal, these managers run daemons as if they were normal (albeit long-running) programs. This slightly simplifies the daemons themselves and provides a more homogeneous experience for the user.
It tremendously simplifies the daemons themselves and does indeed provide a more homogeneous experience for the user. Remember: do one thing only, and do it well. The "daemonizing" part is a second thing, and it belongs in a separate utility. If the user wants to run your daemon "interactively" (e.g. being able to press Ctrl-C to stop it), they should be able to do so by simply running it from the shell. If the user wants to run your daemon "in the background", whatever that means to them, they can arrange it themselves. Why this is such a difficult idea for "daemon" writers to accept is beyond me: there is negative value in a program forcefully detaching itself from every single means of supervision (double- and triple-forking defeats supervising via wait(), closing file descriptors defeats watching for the closing of an injected pipe, etc.).
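To make the supervision point concrete, here is a minimal sketch of wait()-based supervision in Rust (the `./mydaemon` path is hypothetical). A daemon that double-forks makes the child the supervisor spawned exit immediately, so the wait() below returns at once and supervision is defeated:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    loop {
        // Spawn the daemon as an ordinary foreground child process.
        let mut child = Command::new("./mydaemon").spawn()?;
        // wait() blocks until the child exits; that is the whole
        // supervision mechanism. If the daemon double-forks, the
        // process we spawned exits immediately, the real daemon is
        // reparented to init, and we have lost track of it.
        let status = child.wait()?;
        eprintln!("daemon exited with {status}, restarting...");
    }
}
```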
systemd took away the pain. You no longer have to think about and reinvent daemonizing, logging, and so on. Just start whatever you want in a while(1) loop and write to stdout and stderr. No log-rotation nightmare, etc.
This also makes daemons easier to debug: they run in the foreground, so you can start them from the CLI to tinker with them.
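As an illustration (the service name and binary path here are hypothetical), a unit file for such a foreground daemon can be as simple as:

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example
[Unit]
Description=My example daemon

[Service]
# The binary just runs in the foreground: no forking, no PID file.
# stdout/stderr are captured by the journal automatically.
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The same binary can still be started straight from the shell for debugging; only the unit file knows anything about "being a service".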
systemd is hotly debated; some love it and some hate it. But improvements like this are unmatched and extremely helpful.
To make daemons easier to write, more uniform, and more secure, djb created https://cr.yp.to/daemontools.html and used it for many of his projects.
In the corresponding FAQ list, he says this about daemons that detach themselves:
> How can I supervise a daemon that puts itself into the background? When I run inetd, my shell script exits immediately, so supervise keeps trying to restart it.
> Answer: The best answer is to fix the daemon. Having every daemon put itself into the background is bad software design.
> Ever since, I have tried to think “can I solve this problem in a way that reuses known conventions, or do I really, really have to do something different?” The answer, once I can control my ego, is nearly always “I don’t need to do something different”.
This really is the way of wisdom. Not that there’s not room for improvement — there is — but it is normally more effective to attack the problem at hand than try to solve ancillary problems as well.
It's Unix parlance for a forked child process that detaches from its interactive terminal with the setsid() syscall, becomes a child of init, possibly closes or redirects the 0/1/2 file descriptors to logging facilities, possibly changes its CWD, and no longer receives terminal-originating signals like SIGHUP, SIGTTIN, SIGTTOU, etc.
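As a rough sketch of those classic steps, assuming the libc crate and eliding all error checking (a real implementation must check every call), the traditional daemonization dance looks something like this:

```rust
// Minimal sketch of classic daemonization via the libc crate.
fn daemonize() {
    unsafe {
        // First fork: the parent exits, so the child is not a process-group
        // leader and gets reparented to init.
        if libc::fork() > 0 {
            libc::_exit(0);
        }
        // Start a new session, detaching from the controlling terminal;
        // terminal-originating signals (SIGHUP, SIGTTIN, ...) stop arriving.
        libc::setsid();
        // Second fork: the session leader exits, so the daemon can never
        // reacquire a controlling terminal.
        if libc::fork() > 0 {
            libc::_exit(0);
        }
        // Don't pin the filesystem the daemon was started from.
        libc::chdir(b"/\0".as_ptr() as *const libc::c_char);
        // Redirect stdin/stdout/stderr (fds 0/1/2) to /dev/null; a real
        // daemon might point 1 and 2 at a logging facility instead.
        let devnull = libc::open(
            b"/dev/null\0".as_ptr() as *const libc::c_char,
            libc::O_RDWR,
        );
        libc::dup2(devnull, 0);
        libc::dup2(devnull, 1);
        libc::dup2(devnull, 2);
    }
}
```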
Apart from that, a robust non-daemon program should use the same defensive techniques. In a short-running command-line utility it might seem "acceptable" to leak memory, leave resources open, and skip adequate error-checking, but that's just sloppy programming.
> My experience with snare and pizauth is that Rust is a viable language for writing daemons in. Rust isn’t a perfect language (e.g. unsafe Rust currently has no meaningful semantics so any code which uses unsafe comes with fewer guarantees than C)
What exactly does the author mean when they say that unsafe Rust has "no meaningful semantics"? Is this a term of art in language analysis or is the author just saying "it's weird"?
> There are use-cases for async/await, particularly for single-threaded languages, or for people writing network servers that have to deal with vast numbers of queries. Rust is multi-threaded – indeed, its type system forbids most classic multi-threading errors – and very few of us write servers that deal with vast numbers of queries.
This is the portion I was seeking his feedback on. Rust was originally designed to work with native OS threads. Later, nginx-style event-driven designs ("software-emulated threads") proved far more efficient and fast: a single thread jumps from connection to connection instead of blocking while waiting for the other side to respond. You can do this in C, but you need the ability for a function to suspend and resume from the middle of its body where it left off. That calls for coroutines, which, even though possible, make the code complex.
Rust's solution is async/await. But now there are two concurrency models: the more integrated multi-threading, and the newly introduced single-threaded async/await. It's better to get feedback from people who have worked with both about their good and bad experiences.
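For concreteness, here is a minimal sketch of that single-threaded async/await style in Rust. It assumes the tokio crate as the runtime; the echo behaviour and the address are made up for illustration:

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

// A single-threaded runtime: one OS thread multiplexes all connections.
#[tokio::main(flavor = "current_thread")]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        // Each connection becomes a cheap task. At every `.await` the
        // function can be suspended mid-body and resumed later, which is
        // exactly the coroutine ability that is awkward to express in C.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            loop {
                match socket.read(&mut buf).await {
                    Ok(0) | Err(_) => return, // connection closed
                    Ok(n) => {
                        if socket.write_all(&buf[..n]).await.is_err() {
                            return;
                        }
                    }
                }
            }
        });
    }
}
```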