How the init works:
- The entry point is /init.
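
  As an illustration, a Dockerfile for an s6-overlay image typically
  declares /init as the entrypoint (sketch only; the base image, overlay
  version and URL are assumptions, and a real image also needs the
  arch-specific tarball):

    FROM alpine:3.19
    ADD https://github.com/just-containers/s6-overlay/releases/download/v3.1.6.2/s6-overlay-noarch.tar.xz /tmp
    RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz
    ENTRYPOINT ["/init"]
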
- /init sets PATH according to the user-configurable
  /etc/s6-overlay/config/global_path but makes sure it can still access
  the required binaries no matter where they are.
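
  For illustration, /etc/s6-overlay/config/global_path just holds the
  PATH value /init will use; its contents look something like this (the
  exact default may differ between releases):

    /command:/usr/bin:/bin
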
- /init runs /package/admin/s6-overlay/libexec/preinit as root, even if
  the container is running with the USER directive.
  * preinit ensures that /run exists and is writable and executable, and
    that /var/run is a symlink to it.
  * preinit deletes and recreates /run/s6 and sets its ownership to the
    real uid/gid of the process running the container.
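
  In shell terms, what preinit guarantees is roughly the following (the
  real preinit is a binary running as root; this is only an
  approximation, and $uid/$gid stand for the real uid/gid the container
  runs as):

    test -d /run || mkdir /run       # ensure /run exists
    chmod u+wx /run                  # ensure it is writable and executable
    ln -sfn /run /var/run            # ensure /var/run is a symlink to it
    rm -rf /run/s6 && mkdir /run/s6  # delete and recreate /run/s6
    chown "$uid:$gid" /run/s6        # owned by the container's real uid/gid
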
- /init execs into /package/admin/s6-overlay/libexec/stage0.
- stage0 invokes s6-linux-init-maker to create the stage 1 infrastructure
  depending on the S6_* environment variables given to the container.
  s6-l-i-m is normally intended to be run offline, but since we need a
  lot of runtime configurability, we run it online here; it works.
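
  In practice that configurability means the S6_* knobs are ordinary
  environment variables passed at container start; for example (the
  variable names are real s6-overlay options, the values and image name
  are illustrative):

    docker run -e S6_LOGGING=1 -e S6_VERBOSITY=2 -e S6_KEEP_ENV=1 myimage
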
- stage0 execs into the "init" script created by s6-l-i-m, which is the
  real stage1 init that normal machines boot on. It's in
  /run/s6/basedir/bin/init (it had to be created at run-time by stage0,
  which is why it's under /run), but it's just an execution of the
  s6-linux-init binary with some options.
- stage1 sets up the supervision tree on /run/service, with (depending
  on the value of $S6_LOGGING) a catch-all logger logging to
  /run/uncaught-logs.
  * There are two early services: the catch-all logger (if required),
    and a service named s6-linux-init-shutdownd, which you can ignore -
    it's only active when the container is going to shut down.
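
  If the catch-all logger is active, anything a supervised process
  writes to its stdout/stderr without a dedicated logger lands in that
  directory, and you can read it from inside the container; for
  instance:

    cat /run/uncaught-logs/current   # most recent catch-all log file
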
- stage1 execs into s6-svscan, which will remain pid 1 for the rest of
  the container's lifetime.
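
  You can check this from the host (sketch; assumes the image ships a ps
  that understands POSIX options, and "mycontainer" is a placeholder):

    docker exec mycontainer ps -p 1 -o comm=
    # expected output: s6-svscan
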
- When the supervision tree is operational, stage2 runs; this is the
  /run/s6/basedir/scripts/rc.init script, whose source you can read in
  /package/admin/s6-overlay/etc/s6-linux-init/skel/rc.init.
- stage2 reads two s6-rc source directories: the system one in
  /package/admin/s6-overlay/etc/s6-rc/sources, and a user-provided one
  in /etc/s6-overlay/s6-rc.d, which must provide a bundle named "user"
  (which can be empty). It compiles these source directories into a
  compiled s6-rc database in /run/s6/db. s6-rc-compile is normally
  intended to be run offline but, just like with s6-l-i-m, we don't care
  and we run it online here, because we're going for flexibility and
  simplicity for users over a bootability guarantee and speed
  optimization.
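
  As an example, a user-defined longrun service named "myapp" (a
  placeholder) is declared in that source directory and added to the
  "user" bundle like this, following the s6-rc source format:

    /etc/s6-overlay/s6-rc.d/myapp/type             # contains the word: longrun
    /etc/s6-overlay/s6-rc.d/myapp/run              # executable script that starts myapp
    /etc/s6-overlay/s6-rc.d/user/contents.d/myapp  # empty file: puts myapp in the "user" bundle
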
- stage2 runs the s6-rc engine on that compiled database. This brings
  up several services, in the following order (note that
  S6_RUNTIME_PROFILE is supported for legacy stuff):
  * fix-attrs: reads the files in /etc/fix-attrs.d and fixes permissions
    accordingly. This is deprecated; please fix your file permissions
    from outside the container instead, or in your Dockerfile (a
    Dockerfile sketch follows this list).
  * legacy-cont-init: runs all the scripts in /etc/cont-init.d.
  * user: all the services defined in the user bundle; their source
    is in /etc/s6-overlay/s6-rc.d - that's where users should migrate
    their services in order to benefit from parallelism and dependency
    management. By default that user bundle is empty, unless the user
    has installed the syslogd-overlay tarball, in which case it contains
    the services that implement syslogd.
  * legacy-services: all the service directories in /etc/services.d
    are copied to /run/s6/legacy-services and linked to the scandir in
    /run/service, then s6-svscan is notified (a run script sketch
    follows this list). Note that all of this happens *after* the user
    bundle has completed: legacy services are the last ones started.
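
  Sketches for the two legacy mechanisms above ("myapp", the user and
  the path are placeholders). A Dockerfile line that replaces
  fix-attrs:

    RUN chown -R daemon:daemon /var/lib/myapp

  And a minimal /etc/services.d/myapp/run script for a legacy service:

    #!/bin/sh
    exec myapp --foreground
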
- That's it, the container is fully operational.
- If there is no CMD, stage2 exits, having started all its services,
  and the container will keep running until something or someone
  instructs it to exit.
- If there is a CMD, instead of exiting, stage2 spawns it and waits
  for it to finish. Then it stops the container and returns the exit
  code of the CMD to the host.
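
  Concretely, this means the container's exit status is the CMD's exit
  status; for example ("myimage" is a placeholder):

    docker run --rm myimage sh -c 'exit 42'
    echo $?   # prints 42
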
To make the container stop with a given exit code, run:

  echo $exitcode > /run/s6-linux-init-container-results/exitcode && /run/s6/basedir/bin/halt
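
  This is also how a service can take the whole container down when it
  is essential; a minimal sketch of a finish script for a hypothetical
  "myapp" longrun (s6 passes the service's exit code as the first
  argument to finish):

    #!/bin/sh
    # /etc/s6-overlay/s6-rc.d/myapp/finish (sketch)
    echo "$1" > /run/s6-linux-init-container-results/exitcode
    exec /run/s6/basedir/bin/halt
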
Signals to s6-svscan (typically triggered by an outside "docker stop"
command), s6-svscanctl commands, or manually running
/run/s6/basedir/bin/poweroff or /run/s6/basedir/bin/shutdown should work
as well, but then you do not have control over the exit code.