Archive for the ‘Toolbx’ Category
Toolbx — running the same host binary on Arch Linux, Fedora, Ubuntu, etc. containers
This is a deep dive into some of the technical details of Toolbx and is a continuation from the earlier post about bypassing the immutability of OCI containers.
The problem
As we saw earlier, Toolbx uses a special entry point for its containers. It's the toolbox executable itself.
$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug init-container ...
This is achieved by bind mounting the toolbox executable invoked by the user on the hosts to /usr/bin/toolbox inside the containers. While this has some advantages, it opens the door to one big problem. It means that executables from newer or different host operating systems might be running against older or different run-time environments inside the containers. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.
This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.
When binaries are compiled and linked against newer run-time environments, they may start relying on symbols (i.e., non-static global variables, functions, class and struct members, etc.) that are missing in older environments. For example, glibc-2.32 (used in Fedora 33 onwards) added a new version of the pthread_sigmask symbol. If toolbox binaries built and linked against glibc-2.32 are run against older glibc versions, then they will refuse to start.
$ objdump -T /usr/bin/toolbox | grep GLIBC_2.32
0000000000000000 DO *UND* 0000000000000000 GLIBC_2.32 pthread_sigmask
This means that one couldn’t use Fedora 32 Toolbx containers on Fedora 33 hosts, or similarly any containers with glibc older than 2.32 on hosts with newer glibc versions. That’s quite the bummer.
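For the curious, this is roughly what the failure looks like: the dynamic linker refuses to start the binary with an error along these lines (the exact wording varies across glibc versions):
$ /usr/bin/toolbox
/usr/bin/toolbox: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /usr/bin/toolbox)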
If the executables are not ELF binaries, but carefully written POSIX shell scripts, then this problem goes away. Incidentally, Toolbx used to be implemented in POSIX shell, until it was re-written in Go two years ago, which is how it managed to avoid this problem for a while.
Fortunately, Go binaries are largely statically linked, with the notable exception of the standard C library. The scope of the problem would be much bigger if it involved several other dynamic libraries, like in the case of C or C++ programs.
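For context, this is roughly what ldd reports for an unmodified cgo-enabled Go binary built on a recent Fedora host, before any of the tricks described later in this post. The addresses change on every run, and older glibc versions may additionally list libpthread and libdl:
$ ldd ./toolbox
linux-vdso.so.1 (0x00007ffcd8df2000)
libc.so.6 => /lib64/libc.so.6 (0x00007f2a52e00000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2a530ce000)
A typical C or C++ program would additionally pull in libstdc++, libgcc_s, libm and friends.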

Potential options
In theory, the easiest solution is to build the toolbox binary against the oldest supported run-time environment so that it doesn't rely on newer symbols. However, it's easier said than done.
Usually downstream distributors use build environments that are composed of components that are part of that specific version of the distribution. For example, it would be unusual for an RPM for a certain Fedora version to be deliberately built against a run-time from an older Fedora. Carlos O'Donell had an interesting idea on how to implement this in Fedora by only ever building for the oldest supported branch, adding a noautobuild file to disable the mass rebuild automation, and having newer branches always inherit the builds from the oldest one. However, this won't work either. Building against the oldest supported Fedora won't be enough for Fedora's Toolbx because, by definition, Toolbx is meant to run different kinds of containers on hosts. The oldest supported Fedora hosts might still be too new compared to containers of supported Debian, Red Hat Enterprise Linux, Ubuntu etc. versions.
So, yes, in theory, this is the easiest solution, but, in practice, it requires a non-trivial amount of cross-distribution collaboration, and downstream build system and release engineering effort.
The second option is to have Toolbx containers provide their own toolbox binary that's compatible with the run-time environment of the container. This would substantially complicate the communication between the toolbox binaries on the hosts and the ones inside the containers, because the binaries on the hosts and containers would no longer be exactly the same. The communication channel between commands like toolbox create and toolbox enter running on the hosts, and toolbox init-container inside the containers, could no longer use a private and unstable interface that can be easily modified as necessary. Instead, it would have complicated backwards and forwards compatibility requirements. Other than that, it would complicate bug reports, and every single container on a host may need to be updated separately to fix bugs, with updates needing to be co-ordinated across downstream distributors.
The next option is to either statically link against the standard C library, or disable its use in Go. However, that would prevent us from using glibc’s Name Service Switch to look up usernames and groups, or to resolve host names. The replacement code, written in pure Go, can’t handle enterprise set-ups involving Network Information Service and Lightweight Directory Access Protocol, nor can it talk to host OS services like SSSD, systemd-userdbd or systemd-resolved.
It’s true that Toolbx currently doesn’t support enterprise set-ups with NIS and LDAP, but not using NSS will only make it more difficult to add that support in future. Similarly, we don’t resolve any host names at the moment, but given that we are in the business of pulling content over the network, it can easily become necessary in the future. Disabling the use of NSS will leave the toolbox
binary as this odd thing that behaves differently from the rest of the OS for some fundamental operations.
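For completeness, here is how one would build such a fully static binary. Setting CGO_ENABLED=0 tells the Go toolchain to avoid cgo entirely, and ldd then confirms that there is no dynamic linkage left. This is only a sketch of the rejected option, not how Toolbx is actually built:
$ CGO_ENABLED=0 go build -o toolbox .
$ ldd ./toolbox
not a dynamic executable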
An extension of the previous option is to split the toolbox executable into two: one dynamically linked against the standard C library for the hosts, and another with no dynamic linkage to run inside the containers as their entry point. This can impact backwards compatibility and affect the developer experience of hacking on Toolbx.
Existing Toolbx containers expect to bind mount the toolbox executable from the host to /usr/bin/toolbox inside the containers and run toolbox init-container as their entry point. This can't be changed because of the immutability of OCI containers, and Toolbx simply can't afford to break existing containers in a way where they can no longer be entered. This means that the toolbox executable needs to become a shim, without any dynamic linkage, that forwards the invocation to the right executable depending on whether it's running on the hosts or inside the containers.
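To make the idea concrete, the forwarding logic of such a shim could boil down to something like the following. It's illustrated in POSIX shell for brevity, even though the actual proposal was a statically linked binary; the toolbox-host and toolbox-container paths are entirely hypothetical, and /run/.containerenv is the marker file that Podman creates inside its containers. Again, this is a sketch of an option that wasn't taken, not something Toolbx actually does:
#!/bin/sh
# Hypothetical shim: forward to the build that matches the current run-time environment.
if [ -f /run/.containerenv ]; then
    exec /usr/libexec/toolbox-container "$@"
else
    exec /usr/libexec/toolbox-host "$@"
fi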
That brings us to the developer experience of hacking on Toolbx. The first thing to note is that we don't want to go back to using POSIX shell to implement the executable that's meant to run inside the container. Ondřej spent a lot of effort replacing the POSIX shell implementation of Toolbx, and we don't want to undo any part of that. Ideally, we would use the same programming language (i.e., Go) to implement both executables so that one doesn't need to learn multiple disparate languages to work on Toolbx. However, even if we do use Go, we would have to be careful not to share code across the two executables, or be aware that they may have subtle differences in behaviour depending on how they might be linked.
Then there’s the developer experience of hacking on Toolbx on Fedora Silverblue and similar OSTree-based OSes, which is what you would do to eat your own dog food. Experiences are always subjective and this one is unique to hacking Toolbx inside a Toolbx. So let’s take a moment to understand the situation.
On OSTree-based OSes, Toolbx containers are used for development, and, generally speaking, it’s better to use container-specific locations invisible to the host as the development prefixes because the generated executables are specific to each container. Executables built on one container may not work on another, and not on the hosts either, because of the run-time problems mentioned above. Plus, it’s good hygiene not to pollute the hosts.
Similar to Flatpak and Podman, Toolbx is a tool that sets up containers. This means that, unlike most other executables, toolbox must be on the hosts because, barring the init-container command, it can't work inside the containers. The easiest way to do this is to have a separate terminal emulator with a host shell, and invoke toolbox directly from Meson's build directory in $HOME that's shared between the hosts and the Toolbx containers, instead of installing toolbox to the container-specific development prefixes. Note that this only works because toolbox has always been implemented in programming languages with little to no dynamic linking, and only if you ensure that the Toolbx containers used for hacking on Toolbx match the hosts. Otherwise, you might run into the run-time problems mentioned above.
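In practice, that workflow looks something like this from a host shell. Meson and Ninja are invoked as usual; the exact location of the built binary inside the build directory depends on the project's Meson setup, so treat that path as an assumption:
$ meson setup builddir
$ ninja -C builddir
$ ./builddir/src/toolbox list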
The moment there is one executable invoking another, the executables need to be carefully placed on the file system so that one can find the other. This means that either the executables need to be installed into development prefixes, or the shim needs special logic to work out the location of the other binary when invoked directly from Meson's build directory.
The former is a problem because the development prefixes will likely default to container-specific locations invisible from the hosts, preventing the built executables from being trivially invoked from the host. One could have a separate development prefix only for Toolbx that's shared between the containers and the hosts. However, I suspect that a lot of existing and potential Toolbx contributors would find that irksome. They either don't know how to set up a prefix manually or don't want to, and instead use something like jhbuild to do it for them.
The latter requires two different sets of logic depending on whether the shim was invoked directly from Meson’s build directory or from a development prefix. At the very least this would involve locating the second executable from the shim, but could grow into other areas as well. These separate code paths would be crucial enough that they would need to be thoroughly tested. Otherwise, Toolbx hackers and users won’t share the same reality. We could start by running our test suite in both modes, and then meticulously increase coverage, but that would come at the cost of a lengthier test suite.
Failed attempts
Since glibc uses symbol versioning, it’s sometimes possible to use some .symver hackery to avoid linking against newer symbols even when building against a newer glibc. This is what Toolbox used to do to ensure that binaries built against newer glibc versions still ran against older ones. However, this doesn’t defend against changes to the start-up code in glibc, like the one in glibc-2.34 that performed some security hardening.
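One way to check whether such pinning worked is to look at the symbol versions that the binary ends up referencing. On x86_64, the pre-2.32 version of pthread_sigmask is GLIBC_2.2.5, so a successfully pinned binary should reference that instead of GLIBC_2.32:
$ objdump -T ./toolbox | grep pthread_sigmask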
Current solution
Alexander Larsson and Ray Strode pointed out that all non-ancient Toolbx containers have access to the hosts' /usr at /run/host/usr. In other words, Toolbx containers have access to the host run-time environments. So, we decided to ensure that toolbox binaries always run against the host run-time environments.
The toolbox binary has an RPATH pointing to the hosts' libc.so somewhere under /run/host/usr, and its dynamic linker (i.e., PT_INTERP) is changed to the one inside /run/host/usr. Unfortunately, there can only be one PT_INTERP entry inside the binary, so there must be a /run/host on the hosts too for the binary to work on the hosts. Therefore, a /run/host symbolic link is also created on the host, pointing to the hosts' /.
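On such a host, the link is easy to verify; regardless of how its target is spelled, it resolves to the root of the host:
$ readlink -f /run/host
/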
The toolbox binary now looks like this, both on the hosts and inside the Toolbx containers:
$ ldd /usr/bin/toolbox
linux-vdso.so.1 (0x00007ffea01f6000)
libc.so.6 => /run/host/usr/lib64/libc.so.6 (0x00007f6bf1c00000)
/run/host/usr/lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f6bf289a000)
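One way to end up with a binary like this is to set the RPATH and the dynamic linker at link time, by passing the corresponding flags through the external linker that cgo uses. This is only a sketch of the general technique, not necessarily how Toolbx's build system does it:
$ go build -ldflags '-linkmode external -extldflags "-Wl,-rpath,/run/host/usr/lib64 -Wl,--dynamic-linker,/run/host/usr/lib64/ld-linux-x86-64.so.2"' -o toolbox .
An already built binary can be modified in the same way with patchelf:
$ patchelf --set-rpath /run/host/usr/lib64 --set-interpreter /run/host/usr/lib64/ld-linux-x86-64.so.2 ./toolbox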
It’s been almost a year and thus far this approach has held its own. I am mildly bothered by the presence of the /run/host
symbolic link on the hosts, but not enough to lose sleep over it.
Other options
Recently, Robert McQueen brought up the idea of possibly using the Linux kernel's binfmt_misc mechanism to modify the toolbox binary on the fly. I haven't explored this in any seriousness, but maybe I will if the current set-up doesn't work out.
Toolbx @ Community Central
At 15:00 UTC today, I will be talking about Toolbx on a new episode of Community Central. It will be broadcast live on BlueJeans Events (formerly Primetime) and the recording will be available on YouTube. I am looking forward to seeing some friendly faces in the audience.

Toolbx — bypassing the immutability of OCI containers
This is a deep dive into some of the technical details of Toolbx. I find myself regularly explaining them to various people, so I thought that I should write them down. Feel free to read and comment, or you can also happily ignore it.
The problem
OCI containers are famous for being immutable. Once a container has been created with podman create, its attributes can't be changed anymore: for example, the bind mounts, the environment variables, the namespaces being used, and all the other attributes that can be specified via options to the podman create command. This means that once there's a Toolbx, it wouldn't be possible to give it access to a new set of files from the host if the need arose. The Toolbx would have to be deleted and re-created with access to the new paths.
This is a problem, because a Toolbx is where the user sets up her development and troubleshooting environment. Re-creating a Toolbx might mean reinstalling a number of different packages, tweaking configuration files, redeploying various artifacts and so on. Having to repeat all that in the middle of a long hacking session, just because the container’s attributes need to be tweaked, can be annoying.
This is unlike Flatpak containers, where it's possible to override the permissions of a Flatpak either persistently through flatpak override or temporarily during flatpak run.
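For instance, granting an existing Flatpak access to an extra host directory is a one-liner, either persistently or just for a single run; the application ID here is only a placeholder:
$ flatpak override --user --filesystem=~/Projects org.example.App
$ flatpak run --filesystem=~/Projects org.example.App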
Secondly, as the Toolbx code evolves, we want to be able to transparently update existing Toolbxes to enable new features and fix bugs. It would be a real drag if users had to consciously re-create their containers.
The solution

Toolbx bypasses this by using a special entry point for the container. Those inquisitive types who have run podman inspect on a Toolbx container might have noticed that the toolbox executable itself is the container's entry point.
$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug init-container ...
This means that when Toolbx starts a container using podman start, the toolbox init-container command gets run as the first process inside the container. Only after this has run does the user's interactive shell get spawned.
Instead of setting up the container entirely through podman create, Toolbx tries to use this reflexive entry point as much as possible. For example, Toolbx doesn't use podman create --volume /tmp:/tmp to give access to the host's /tmp inside the container. It bind mounts the entire root filesystem from the host at /run/host in the container with podman create --volume /:/run/host. Then, later when the container is started, toolbox init-container recursively bind mounts the container's /run/host/tmp to /tmp. Since the container has its own mount namespace, the /run/host and /tmp bind mounts are neatly hidden away from the host.
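Conceptually, what toolbox init-container does for /tmp boils down to something like this inside the container's mount namespace. This is a simplified sketch; the real implementation is in Go, handles errors, and covers many more locations:
$ mount --rbind /run/host/tmp /tmp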
Therefore, if in future additional host locations need to be exposed within the Toolbx, then those can be added to toolbox init-container, and once the user restarts the container after updating the toolbox executable, the new locations will show up inside the existing container. The same applies if the mount parameters of an existing location need to be tweaked, or if a host location needs to be removed from the container.
This is not restricted to just bind mounts from the host. The same approach with toolbox init-container is used to configure as many different aspects of the container as possible. For example, setting up users, keeping the timezone and DNS configuration synchronized with the host, and so on.
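Traces of this are visible from inside any Toolbx container. For example, the DNS configuration is a symbolic link into /run/host; the exact target may vary between Toolbx versions:
$ readlink /etc/resolv.conf
/run/host/etc/resolv.conf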
Further details
One might wonder how a Toolbx container manages to have a toolbox executable inside it, especially since the toolbox package is not installed within the container. It is achieved by bind mounting the toolbox executable invoked by the user on the host to /usr/bin/toolbox inside the container.
This has some advantages.
There is always only one version of the toolbox executable that's involved: the one that's on the host. This means that the exact invocation of toolbox init-container, which is baked into the Toolbx and shows up in podman inspect, is the only interface that needs to be kept stable as the Toolbx code evolves. As long as toolbox init-container can be invoked with that specific command line, everything else can be changed, because it's the same executable on both the host and inside the container.
If the container had a separate toolbox package in it, then the user might have to separately update another executable to get the expected results, and we would have to ensure that different mismatched versions of the executable can work with each other across the host and the container. With a growing number of containers, the former would be a nightmare for the user, while the latter would be almost impossible to test.
Finally, having only one version of the toolbox executable makes it a lot easier for users to file bug reports. There's only one version to report, not several spread across different environments.
This leads to another problem
Once you let this sink in, you might realize that bind mounting the toolbox executable from the host into the Toolbx means that an executable from a newer or different operating system might be running against an older or different run-time environment inside the container. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.
This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.
I will leave you with that thought and let you puzzle over it, because it will be the topic of a future post.
Toolbx is now on Matrix
Toolbx now has its own room on matrix.org. Point your Matrix clients to #toolbx:matrix.org and join the conversation.

We are working on setting up an IRC bridge with Libera.Chat but that will take a few more months as we go through the process to register our project channel.
Toolbx: Red Hat is hiring a software engineer
The Desktop Team at Red Hat wants to hire a software engineer to work full-time on Toolbx (formerly known as Toolbox) with me, and hopefully go on to maintain it in the near future. You will be working upstream and downstream (Fedora and RHEL) to improve the developer and troubleshooting experience on OSTree-based Linux operating systems like Fedora Silverblue and CoreOS, and extend some of the benefits to even traditional package-based OSes like Fedora Workstation.

If you are excited to work across the different layers of a modern Linux operating system, with a focus on container and desktop technologies, and aren’t afraid of getting your hands dirty with C and Go, then please go ahead and apply. Toolbx is a relatively young project with a rapidly growing community, so you are sure to have a fun ride.

Toolbox is now Toolbx
Toolbox is being renamed to Container Toolbx or just Toolbx.
I had always been uncomfortable with the generic nature of the term toolbox, and people kept complaining that it's terribly difficult to search for. Recently, we have been trying to improve the online presence of the project by creating a website and a Twitter handle, and it's impossible to find any decent Internet real estate with anything named toolbox.
It looks like dropping the penultimate character from words to form names is a thing these days, hence Toolbx.

We haven’t yet renamed the Git repository or anything in the code or the binary or the manuals. Renaming the binary, for example, has implications for existing containers, and we don’t want to cause any needless disruption for users. So, those will gradually happen over time with all the necessary compatibility aliases and such.
Meanwhile, Fedora Magazine has published an interview with yours truly about Toolbx that talks about the history, latest improvements, future direction, and various other aspects of the project.
It should be obvious, but the Toolbx website was made by Jakub Steiner.