s6-rc -a list
s6-rc -d list
or
s6-rc-db list services
s6-rc -u change foo
s6-rc -da change
s6-rc-db -u script foo | xargs -0 printf "%s "
s6-rc-db pipeline foo
s6-rc-db -d all-dependencies foo
s6-rc-update -n newcompiled
The first line is the s6-rc invocation that will bring the old services down, and the second line is the s6-rc invocation that will bring the new services up; in each case, the services affected are listed after -- change on the printed command line.
Because parsing sucks. Writing parsers is an annoying, thankless task, with significant risks of bugs and security holes; and automatic parser generators produce big, inefficient code - and they are not immune to bugs or security holes either. For security, efficiency and maintainability reasons, I prefer to focus my efforts on code that actually does stuff, not code that parses a text file.
Using the filesystem as a key-value store is a good technique to avoid parsing, and skarnet.org packages do it everywhere: for instance, s6-envdir uses the file name as a key and the file contents as a value. The s6-rc-compile source format is just another instance of this technique.
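As a sketch of the technique, here is what a service definition directory can look like, built with nothing but the shell. The service name mydaemon, its dependency mynetwork, and the run script are made-up examples, not part of any real database:

```shell
#!/bin/sh -e
# Filesystem as key-value store: each file name is a key, each file's
# contents are the value. No parser needed on either side.
mkdir -p src/mydaemon
echo longrun > src/mydaemon/type            # key "type", value "longrun"
echo mynetwork > src/mydaemon/dependencies  # one dependency name per line
cat > src/mydaemon/run <<'EOF'              # key "run": the daemon's run script
#!/command/execlineb -P
mydaemond -f
EOF
```

Reading this back requires no parsing at all: s6-rc-compile simply opens src/mydaemon/type and friends and uses their contents directly.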
The source format generally plays well with automated tools, be it for reading, as s6-rc-compile does, or for writing. I fully expect it to be used as the input (resp. the output) of automated tools that convert service definitions to (resp. from) other formats, such as systemd unit files, sysv-rc scripts or OpenRC scripts; at the very least, the s6-rc source format will make things easy for those tools.
And if you love configuration files, are ok with writing a parser (which is indubitably easier to do in other languages than C), and want to write a program that takes a text file, parses it and outputs a service definition directory in the s6-rc-compile source format, it should also be rather easy - please, feel free!
Use bundles. Bundles are the solution to most of the questions in the same vein.
Let's say you want to provide a ssh daemon, and have two possible implementations, opensshd and dropbear, but you want to provide a virtual service named sshd.
Define your two longruns, opensshd and dropbear; then define a bundle named sshd that only contains your default implementation, opensshd. Use the name sshd in your dependencies. When you run s6-rc-compile, all the dependencies will resolve to opensshd, and the compiled service database will consider opensshd to be the "real" service; but users will still be able to run s6-rc commands involving sshd. And if you want to change the default to dropbear, just change the sshd/contents file to dropbear, recompile the database, and run s6-rc-update.
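In source format, such a virtual service is tiny. A sketch, where src is an arbitrary source directory name (the layout follows the s6-rc-compile source format described above):

```shell
#!/bin/sh -e
# A virtual "sshd" service as a bundle: two small files.
mkdir -p src/sshd
echo bundle > src/sshd/type
echo opensshd > src/sshd/contents   # default implementation
# Changing the default later is a one-line edit:
echo dropbear > src/sshd/contents   # then recompile and run s6-rc-update
```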
The advantage of proceeding this way is that online service dependencies are kept very simple: dependencies are a directed acyclic graph, which is easy to handle - that is the reason why the compiled database is small, and why the s6-rc program is so small and fast. There are "AND" dependencies, but no "OR" dependencies, which would introduce great complexity both in data structures and in the dependency resolution engine. s6-rc handles this complexity offline.
You can use bundles to represent any collection of services, and write all your dependencies using only bundle names if you want. Bundles have multiple uses, and virtual services are definitely one of them.
Yes.
If you are using a service manager such as sysv-rc or OpenRC, you have a collection of init scripts that can be called with at least start and stop arguments. You also know the dependencies between those scripts, or at least a reasonable ordering.
You can automatically generate a source directory for s6-rc-compile: for every init script /etc/init.d/foo that you have, create a oneshot service definition directory named foo, with an up file containing /etc/init.d/foo start and a down file containing /etc/init.d/foo stop.
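The generation can be done mechanically. A sketch: here a mock initd directory with a single script foo stands in for your real /etc/init.d, so the example is self-contained:

```shell
#!/bin/sh -e
# Wrap each sysv init script in a oneshot definition directory.
mkdir -p initd src
printf '#!/bin/sh\necho "foo $1"\n' > initd/foo   # mock init script
chmod +x initd/foo

for script in initd/*; do
  name=$(basename "$script")
  mkdir -p "src/$name"
  echo oneshot > "src/$name/type"                        # each script becomes a oneshot
  printf '%s start\n' "$PWD/$script" > "src/$name/up"    # up: "script start"
  printf '%s stop\n'  "$PWD/$script" > "src/$name/down"  # down: "script stop"
  : > "src/$name/dependencies"   # fill in from your known script ordering
done
```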
You can now compile your s6-rc service database, and use the s6-rc engine as your service manager. Transitions will use your original init scripts, and the supervision features of s6 will not be used, but you will get proper dependency tracking and easy state changes.
Then, you can improve the database by changing services one by one, turning them into longruns so daemons get supervised when applicable, rewriting them into bundles calling more atomic services if needed, etc. That can be done at your own pace, one service at a time, while still getting some benefits from s6-rc; and if an iteration doesn't work, you can always roll back while you fix it.
You have something better than runlevels: you have bundles.
When writing your service database in source format, take note of the common sets of services that you like to run together, what other init systems sometimes call runlevels. For each of those sets, define a bundle containing all those services. For instance, you could define a runlevel-1 bundle that contains only a single getty, a runlevel-2 bundle that contains only your local services and no network, a runlevel-3 bundle that contains runlevel-2 as well as network services, and a runlevel-5 bundle that contains runlevel-3 and your desktop. You can even create a runlevel-0 bundle that contains nothing at all!
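In source format, such "runlevel" bundles are just small directories. A sketch, where the service names tty1, localmail and sshd are placeholders for your own services:

```shell
#!/bin/sh -e
# "Runlevel" bundles in s6-rc-compile source format.
mkdir -p src/runlevel-2 src/runlevel-3
echo bundle > src/runlevel-2/type
printf '%s\n' tty1 localmail > src/runlevel-2/contents
echo bundle > src/runlevel-3/type
printf '%s\n' runlevel-2 sshd > src/runlevel-3/contents  # bundles can contain bundles
```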
In your boot script (/etc/rc.init, for instance, if you're using s6-linux-init), after invoking s6-rc-init, just ask s6-rc to start the set of services you want up by default: s6-rc change runlevel-5.
If you later want to change your current set of services, you can then tell s6-rc to switch, using the -p option to make sure to stop services you don't want up anymore: s6-rc -p change runlevel-2.
Bundles are easy to use, they're flexible, and they're powerful. They give you the same level of functionality as runlevels would, and more. You can even add bundles to compiled service databases - including the live one - or remove bundles from them without having to recompile them: that's what the s6-rc-bundle utility is for.
When in doubt, use bundles.
Because those intermediate states are unnecessary.
From the machine's point of view, things are simple: a service is either up or it's not. If a service fails to start, then it's still down. Note that it is recommended to write transactional oneshots for this very reason: it is simple to retry starting a service that failed to start, but it is hard to recover from a service that is only "partially up" - and this is true whether you're using s6-rc or another service manager.
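What "transactional" means here can be sketched in plain shell. The state directory and the failing configure step below are made up for illustration; the point is that on any failure, partial work is undone, so the service ends up cleanly down and retrying is always safe:

```shell
#!/bin/sh
# Sketch of a transactional "up" script: all steps succeed, or none do.
svcdir=./run-mysvc             # hypothetical state directory
do_configure() { return 1; }   # simulate a second step that fails

up() {
  mkdir "$svcdir" || return 1
  if ! do_configure; then
    rm -rf "$svcdir"           # roll back step 1 before reporting failure
    return 1
  fi
}

if up; then echo up; else echo down; fi   # prints "down"; no leftover state
```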
Service managers that use intermediate states do so in order to keep track of what they're doing and what they have done. But this introduces needless complexity: the reality is that the service is either up or down, it's either in the state you wanted it to be or not. If it's in some other, weird, state, then the service scripts have not been properly designed - they are not transactional.
s6-rc does not keep track of "failed" states: a service that fails to start simply remains down, and s6-rc exits 1 to report that something went wrong. To know what services failed to start, compare the result of s6-rc -a list against your expected machine state.
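That comparison is a one-liner with standard tools. A sketch: on a real system the actual file would come from s6-rc -a list > actual; here both lists are hardcoded (ntpd, sshd, tty1 are example names) so the snippet is self-contained:

```shell
#!/bin/sh -e
printf '%s\n' ntpd sshd tty1 > expected   # services we asked to be up
printf '%s\n' ntpd tty1 > actual          # what s6-rc -a list reported
sort -o expected expected
sort -o actual actual
comm -23 expected actual                  # prints services not up: sshd
```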
The reason for this design is simple: if the s6-rc process is killed in the middle of a transition while a service's state is "starting", what should the next invocation do? This is unclear; the intermediate state introduces ambiguity where there should be none. Likewise, if there is a "failed" service, what should the next invocation do? Try to restart it, or not? That depends on what the user wants; it is policy, not mechanism. Simply reporting the error while keeping the state as "down" allows users to apply their chosen policies - see below.
Keep it simple, stupid.
In the world of software development, it is important to distinguish mechanism from policy. Mechanism is "how do I perform the job", and should, theoretically, be addressed by software authors. Policy is "what are the details of the job I perform, where should I put my files, what conventions do I use", and should, theoretically, be addressed by Unix distributions.
Like the rest of skarnet.org software, s6-rc aims to provide mechanism, not policy: it is OS-agnostic and distribution-agnostic. Providing boot scripts, or anything of that kind, would go against this principle: a policy defined by the software can conflict with a policy defined by a distribution - for instance, when the provided boot scripts do not match the distribution's needs - and then distributors have to patch the software!
The s6-rc tools only provide mechanism, so they can be used as is by individual users, or by a distribution. They do not need to be patched. It is up to distributions to provide their own policy surrounding those tools, including complete service databases. It is literally the distributors' job!
OpenRC is a different case: it was developed by and for a Linux distribution, so the OpenRC developers did not have to think much about separating mechanism from policy. It works very well for Gentoo and Gentoo-derived distributions, but it requires adaptation and more work from the admin to use OpenRC outside of that context.