Archive for the ‘Podman’ Category
Toolbx — running the same host binary on Arch Linux, Fedora, Ubuntu, etc. containers
This is a deep dive into some of the technical details of Toolbx and is a continuation from the earlier post about bypassing the immutability of OCI containers.
The problem
As we saw earlier, Toolbx uses a special entry point for its containers. It’s the toolbox executable itself.
$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug init-container ...
This is achieved by bind mounting the toolbox executable invoked by the user on the hosts to /usr/bin/toolbox inside the containers. While this has some advantages, it opens the door to one big problem. It means that executables from newer or different host operating systems might be running against older or different run-time environments inside the containers. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.
This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.
When binaries are compiled and linked against newer run-time environments, they may start relying on symbols (i.e., non-static global variables, functions, class and struct members, etc.) that are missing in older environments. For example, glibc-2.32 (used in Fedora 33 onwards) added a new version of the pthread_sigmask symbol. If toolbox binaries built and linked against glibc-2.32 are run against older glibc versions, then they will refuse to start.
$ objdump -T /usr/bin/toolbox | grep GLIBC_2.32
0000000000000000 DO *UND* 0000000000000000 GLIBC_2.32 pthread_sigmask
This means that one couldn’t use Fedora 32 Toolbx containers on Fedora 33 hosts, or similarly any containers with glibc older than 2.32 on hosts with newer glibc versions. That’s quite the bummer.
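As an aside, a quick way to find the newest glibc symbol version referenced by a binary, and hence the oldest glibc it can run against, is to filter the output of objdump. The exact number below is illustrative:
$ objdump -T /usr/bin/toolbox | sed -n 's/.*GLIBC_\([0-9.]\+\).*/\1/p' | sort -uV | tail -1
2.32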
If the executables are not ELF binaries, but carefully written POSIX shell scripts, then this problem goes away. Incidentally, Toolbx used to be implemented in POSIX shell, until it was re-written in Go two years ago, which is how it managed to avoid this problem for a while.
Fortunately, Go binaries are largely statically linked, with the notable exception of the standard C library. The scope of the problem would be much bigger if it involved several other dynamic libraries, like in the case of C or C++ programs.

Potential options
In theory, the easiest solution is to build the toolbox binary against the oldest supported run-time environment so that it doesn’t rely on newer symbols. However, it’s easier said than done.
Usually downstream distributors use build environments that are composed of components that are part of that specific version of the distribution. For example, it would be unusual for an RPM for a certain Fedora version to be deliberately built against a run-time from an older Fedora. Carlos O’Donell had an interesting idea on how to implement this in Fedora by only ever building for the oldest supported branch, adding a noautobuild file to disable the mass rebuild automation, and having newer branches always inherit the builds from the oldest one. However, this won’t work either. Building against the oldest supported Fedora won’t be enough for Fedora’s Toolbx because, by definition, Toolbx is meant to run different kinds of containers on hosts. The oldest supported Fedora hosts might still be too new compared to containers of supported Debian, Red Hat Enterprise Linux, Ubuntu etc. versions.
So, yes, in theory, this is the easiest solution, but, in practice, it requires a non-trivial amount of cross-distribution collaboration, and downstream build system and release engineering effort.
The second option is to have Toolbx containers provide their own toolbox binary that’s compatible with the run-time environment of the container. This would substantially complicate the communication between the toolbox binaries on the hosts and the ones inside the containers, because the binaries on the hosts and containers will no longer be exactly the same. The communication channel between commands like toolbox create and toolbox enter running on the hosts, and toolbox init-container inside the containers can no longer use a private and unstable interface that can be easily modified as necessary. Instead, it would have complicated backwards and forwards compatibility requirements. Other than that, it would complicate bug reports, and every single container on a host may need to be updated separately to fix bugs, with updates needing to be co-ordinated across downstream distributors.
The next option is to either statically link against the standard C library, or disable its use in Go. However, that would prevent us from using glibc’s Name Service Switch to look up usernames and groups, or to resolve host names. The replacement code, written in pure Go, can’t handle enterprise set-ups involving Network Information Service and Lightweight Directory Access Protocol, nor can it talk to host OS services like SSSD, systemd-userdbd or systemd-resolved.
It’s true that Toolbx currently doesn’t support enterprise set-ups with NIS and LDAP, but not using NSS will only make it more difficult to add that support in future. Similarly, we don’t resolve any host names at the moment, but given that we are in the business of pulling content over the network, it can easily become necessary in the future. Disabling the use of NSS will leave the toolbox binary as this odd thing that behaves differently from the rest of the OS for some fundamental operations.
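For what it’s worth, here is roughly what that option would look like. Building with cgo disabled makes the os/user and net packages fall back to their pure-Go implementations, and produces a binary with no dynamic linkage at all. The source path is illustrative:
$ CGO_ENABLED=0 go build -o toolbox ./src
$ ldd ./toolbox
	not a dynamic executable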
An extension of the previous option is to split the toolbox executable into two: one dynamically linked against the standard C library for the hosts, and another with no dynamic linkage to run inside the containers as their entry point. This can impact backwards compatibility and affect the developer experience of hacking on Toolbx.
Existing Toolbx containers want to bind mount the toolbox executable from the host to /usr/bin/toolbox inside the containers and run toolbox init-container as their entry point. This can’t be changed because of the immutability of OCI containers, and Toolbx simply can’t afford to break existing containers in a way where they can no longer be entered. This means that the toolbox executable needs to become a shim, without any dynamic linkage, that forwards the invocation to the right executable depending on whether it’s running on the hosts or inside the containers.
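In other words, the forwarding logic of such a shim would conceptually look like this. This is only a sketch in shell to show the idea; the real shim would have to be a statically linked binary, and the forwarded-to paths are entirely hypothetical:
# Sketch: Podman creates /run/.containerenv inside its containers, so its
# presence distinguishes a container from the host.
if [ -f /run/.containerenv ]; then
    exec /usr/libexec/toolbox-container "$@"
else
    exec /usr/libexec/toolbox-host "$@"
fi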
That brings us to the developer experience of hacking on Toolbx. The first thing to note is that we don’t want to go back to using POSIX shell to implement the executable that’s meant to run inside the container. Ondřej spent a lot of effort replacing the POSIX shell implementation of Toolbx, and we don’t want to undo any part of that. Ideally, we would use the same programming language (i.e., Go) to implement both executables so that one doesn’t need to learn multiple disparate languages to work on Toolbx. However, even if we do use Go, we would have to be careful not to share code across the two executables, or be aware that they may have subtle differences in behaviour depending on how they might be linked.
Then there’s the developer experience of hacking on Toolbx on Fedora Silverblue and similar OSTree-based OSes, which is what you would do to eat your own dog food. Experiences are always subjective and this one is unique to hacking Toolbx inside a Toolbx. So let’s take a moment to understand the situation.
On OSTree-based OSes, Toolbx containers are used for development, and, generally speaking, it’s better to use container-specific locations invisible to the host as the development prefixes because the generated executables are specific to each container. Executables built on one container may not work on another, and not on the hosts either, because of the run-time problems mentioned above. Plus, it’s good hygiene not to pollute the hosts.
Similar to Flatpak and Podman, Toolbx is a tool that sets up containers. This means that unlike most other executables, toolbox must be on the hosts because, barring the init-container command, it can’t work inside the containers. The easiest way to do this is to have a separate terminal emulator with a host shell, and invoke toolbox directly from Meson’s build directory in $HOME that’s shared between the hosts and the Toolbx containers, instead of installing toolbox to the container-specific development prefixes. Note that this only works because toolbox has always been implemented in programming languages with little to no dynamic linking, and only if you ensure that the Toolbx containers for hacking on Toolbx match the hosts. Otherwise, you might run into the run-time problems mentioned above.
The moment there is one executable invoking another, the executables need to be carefully placed on the file system so that one can find the other one. This means that either the executables need to be installed into development prefixes or that the shim should have special logic to work out the location of the other binary when invoked directly from Meson’s build directory.
The former is a problem because the development prefixes will likely default to container-specific locations invisible from the hosts, preventing the built executables from being trivially invoked from the host. One could have a separate development prefix only for Toolbx that’s shared between the containers and the hosts. However, I suspect that a lot of existing and potential Toolbx contributors would find that irksome. They either don’t know how to set up a prefix manually or don’t want to, and instead use something like jhbuild to do it for them.
The latter requires two different sets of logic depending on whether the shim was invoked directly from Meson’s build directory or from a development prefix. At the very least this would involve locating the second executable from the shim, but could grow into other areas as well. These separate code paths would be crucial enough that they would need to be thoroughly tested. Otherwise, Toolbx hackers and users won’t share the same reality. We could start by running our test suite in both modes, and then meticulously increase coverage, but that would come at the cost of a lengthier test suite.
Failed attempts
Since glibc uses symbol versioning, it’s sometimes possible to use some .symver hackery to avoid linking against newer symbols even when building against a newer glibc. This is what Toolbox used to do to ensure that binaries built against newer glibc versions still ran against older ones. However, this doesn’t defend against changes to the start-up code in glibc, like the one in glibc-2.34 that performed some security hardening.
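For the curious, the hackery boils down to a .symver assembler directive that pins a reference to an older symbol version at build time. A minimal sketch in C, assuming an x86_64 host where the old version of the symbol is GLIBC_2.2.5, with a hypothetical file name:
$ cat pthread-sigmask-compat.c
/* Make calls to pthread_sigmask bind to the old symbol version, even
   when compiling and linking against glibc-2.32 or newer. The directive
   must be in scope wherever the function is called. */
__asm__(".symver pthread_sigmask, pthread_sigmask@GLIBC_2.2.5");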
Current solution
Alexander Larsson and Ray Strode pointed out that all non-ancient Toolbx containers have access to the hosts’ /usr at /run/host/usr. In other words, Toolbx containers have access to the host run-time environments. So, we decided to ensure that toolbox binaries always run against the host run-time environments.
The toolbox binary has an rpath pointing to the hosts’ libc.so somewhere under /run/host/usr, and its dynamic linker (i.e., PT_INTERP) is changed to the one inside /run/host/usr. Unfortunately, there can only be one PT_INTERP entry inside the binary, so there must be a /run/host on the hosts too for the binary to work on the hosts. Therefore, a /run/host symbolic link is also created on the host pointing to the hosts’ /.
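These fields are set when the toolbox binary is built, but the same transformation can be approximated after the fact with patchelf, to get a feel for what’s going on. A rough illustration on an x86_64 host, using a scratch copy of the binary:
$ sudo ln -s / /run/host
$ patchelf --set-interpreter /run/host/usr/lib64/ld-linux-x86-64.so.2 ./toolbox
$ patchelf --set-rpath /run/host/usr/lib64 ./toolbox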
The toolbox binary now looks like this, both on the hosts and inside the Toolbx containers:
$ ldd /usr/bin/toolbox
linux-vdso.so.1 (0x00007ffea01f6000)
libc.so.6 => /run/host/usr/lib64/libc.so.6 (0x00007f6bf1c00000)
/run/host/usr/lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f6bf289a000)
It’s been almost a year and thus far this approach has held its own. I am mildly bothered by the presence of the /run/host symbolic link on the hosts, but not enough to lose sleep over it.
Other options
Recently, Robert McQueen brought up the idea of possibly using the Linux kernel’s binfmt_misc mechanism to modify the toolbox binary on the fly. I haven’t explored this in any seriousness, but maybe I will if the current set-up doesn’t work out.
Toolbx @ Community Central
At 15:00 UTC today, I will be talking about Toolbx on a new episode of Community Central. It will be broadcast live on BlueJeans Events (formerly Primetime) and the recording will be available on YouTube. I am looking forward to seeing some friendly faces in the audience.

Toolbx — bypassing the immutability of OCI containers
This is a deep dive into some of the technical details of Toolbx. I find myself regularly explaining them to various people, so I thought that I should write them down. Feel free to read and comment, or you can also happily ignore it.
The problem
OCI containers are famous for being immutable. Once a container has been created with podman create, its attributes can’t be changed anymore. For example, the bind mounts, the environment variables, the namespaces being used, and all the other attributes that can be specified via options to the podman create command. This means that once there’s a Toolbx, it wouldn’t be possible to give it access to a new set of files from the host if the need arose. The Toolbx would have to be deleted and re-created with access to the new paths.
This is a problem, because a Toolbx is where the user sets up her development and troubleshooting environment. Re-creating a Toolbx might mean reinstalling a number of different packages, tweaking configuration files, redeploying various artifacts and so on. Having to repeat all that in the middle of a long hacking session, just because the container’s attributes need to be tweaked, can be annoying.
This is unlike Flatpak containers, where it’s possible to override the permissions of a Flatpak either persistently through flatpak override or temporarily during flatpak run.
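For example, giving an existing Flatpak access to an extra host directory is a one-liner, either persistently or for a single run. The application ID and path here are hypothetical:
$ flatpak override --user --filesystem=~/Projects org.example.App
$ flatpak run --filesystem=~/Projects org.example.App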
Secondly, as the Toolbx code evolves, we want to be able to transparently update existing Toolbxes to enable new features and fix bugs. It would be a real drag if users had to consciously re-create their containers.
The solution

Toolbx bypasses this by using a special entry point for the container. Those inquisitive types who have run podman inspect on a Toolbx container might have noticed that the toolbox executable itself is the container’s entry point.
$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug init-container ...
This means that when Toolbx starts a container using podman start, the toolbox init-container command gets run as the first process inside the container. Only after this has run does the user’s interactive shell get spawned.
Instead of setting up the container entirely through podman create, Toolbx tries to use this reflexive entry point as much as possible. For example, Toolbx doesn’t use podman create --volume /tmp:/tmp to give access to the host’s /tmp inside the container. It bind mounts the entire root filesystem from the host at /run/host in the container with podman create --volume /:/run/host. Then, later when the container is started, toolbox init-container recursively bind mounts the container’s /run/host/tmp to /tmp. Since the container has its own mount namespace, the /run/host and /tmp bind mounts are neatly hidden away from the host.
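Conceptually, what toolbox init-container does for /tmp amounts to the following mount invocation inside the container’s mount namespace. The real implementation is in Go, so this shell command is only an equivalent:
$ mount --rbind /run/host/tmp /tmp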
Therefore, if in future additional host locations need to be exposed within the Toolbx, then those can be added to toolbox init-container, and once the user restarts the container after updating the toolbox executable, the new locations will show up inside the existing container. The same applies if the mount parameters of an existing location need to be tweaked, or if a host location needs to be removed from the container.
This is not restricted to just bind mounts from the host. The same approach with toolbox init-container is used to configure as many different aspects of the container as possible. For example, setting up users, keeping the timezone and DNS configuration synchronized with the host, and so on.
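As an illustration, keeping the DNS configuration synchronized can be as simple as pointing the container’s /etc/resolv.conf into /run/host. This is a sketch of the idea, not necessarily the exact logic used by toolbox init-container:
$ ln -fs /run/host/etc/resolv.conf /etc/resolv.conf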
Further details
One might wonder how a Toolbx container manages to have a toolbox executable inside it, especially since the toolbox package is not installed within the container. It is achieved by bind mounting the toolbox executable invoked by the user on the host to /usr/bin/toolbox inside the container.
This has some advantages.
There is always only one version of the toolbox executable that’s involved — the one that’s on the host. This means that the exact invocation of toolbox init-container, which is baked into the Toolbx and shows up in podman inspect, is the only interface that needs to be kept stable as the Toolbx code evolves. As long as toolbox init-container can be invoked with that specific command line, everything else can be changed because it’s the same executable on both the host and inside the container.
If the container had a separate toolbox package in it, then the user might have to separately update another executable to get the expected results, and we would have to ensure that different mismatched versions of the executable can work with each other across the host and the container. With a growing number of containers, the former would be a nightmare for the user, while the latter would be almost impossible to test.
Finally, having only one version of the toolbox executable makes it a lot easier for users to file bug reports. There’s only one version to report, not several spread across different environments.
This leads to another problem
Once you let this sink in, you might realize that bind mounting the toolbox executable from the host into the Toolbx means that an executable from a newer or different operating system might be running against an older or different run-time environment inside the container. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.
This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.
I will leave you with that thought and let you puzzle over it, because it will be the topic of a future post.
Toolbx is now on Matrix
Toolbx now has its own room on matrix.org. Point your Matrix clients to #toolbx:matrix.org and join the conversation.

We are working on setting up an IRC bridge with Libera.Chat but that will take a few more months as we go through the process to register our project channel.
Toolbx: Red Hat is hiring a software engineer
The Desktop Team at Red Hat wants to hire a software engineer to work full-time on Toolbx (formerly known as Toolbox) with me, and hopefully go on to maintain it in the near future. You will be working upstream and downstream (Fedora and RHEL) to improve the developer and troubleshooting experience on OSTree-based Linux operating systems like Fedora Silverblue and CoreOS, and extend some of the benefits to even traditional package-based OSes like Fedora Workstation.

If you are excited to work across the different layers of a modern Linux operating system, with a focus on container and desktop technologies, and aren’t afraid of getting your hands dirty with C and Go, then please go ahead and apply. Toolbx is a relatively young project with a rapidly growing community, so you are sure to have a fun ride.

Toolbox is now Toolbx
Toolbox is being renamed to Container Toolbx or just Toolbx.
I had always been uncomfortable with the generic nature of the term toolbox, and people keep complaining that it’s terribly difficult to search for. Recently, we have been trying to improve the online presence of the project by creating a website and a Twitter handle, and it’s impossible to find any decent Internet real estate with anything called toolbox.
It looks like dropping the penultimate character from words to form names is a thing these days, hence Toolbx.

We haven’t yet renamed the Git repository or anything in the code or the binary or the manuals. Renaming the binary, for example, has implications for existing containers, and we don’t want to cause any needless disruption for users. So, those will gradually happen over time with all the necessary compatibility aliases and such.
Meanwhile, Fedora Magazine has published an interview with yours truly about Toolbx that talks about the history, latest improvements, future direction, and various other aspects of the project.
It should be obvious, but the Toolbx website was made by Jakub Steiner.
Toolbox — A fall 2019 update
Things have been moving fast in Toolbox land, and it’s time to talk about what we have been doing lately.
New home
Toolbox is now part of the containers organization on GitHub. We felt that the project had outgrown the prototype stage — going by the activity on the GitHub project it’s safe to say that there are at least a few thousand users who rely on it to get their work done; and we are increasingly working towards expanding the scope of the project to go beyond just setting up a development environment.
Housing the project in my personal GitHub namespace meant that I couldn’t share admin access with other contributors, and this was a problem we had to address as more and more people kept joining the project. Over the past year, we have developed a really good working relationship with the Podman team and other members of the containers organization, without whom Toolbox wouldn’t exist, so moving in under the same umbrella felt like a natural next step towards growing the project.
Migration to cgroups v2
Fedora 31 ships with cgroups v2 by default. The major blocker for cgroups v2 adoption so far was the lack of support in the various container and virtualization tools, including the Podman stack. Since Toolbox containers are just OCI containers managed with Podman, we saw some action too.
After updating the host operating system to Fedora 31, Toolbox will try to migrate your existing containers to work with cgroups v2. Sadly, this is a somewhat complicated move, and in theory it’s possible that the migration might break some containers depending on how they were configured. So far, as per our testing, it seems that containers created by Toolbox do get smoothly migrated, so hopefully you won’t notice.
However, if things go wrong, barring a delicate surgery on the container requiring some pretty arcane knowledge, your only option might be to do a factory reset of your local Podman installation. As factory resets go, you will lose all your existing OCI containers and images on your local system. This is a sad outcome for those unfortunate enough to encounter it. However, if you do find yourself in this quagmire then take a look at the toolbox reset command.
Note that you need to have podman-1.6.2 and toolbox-0.0.16 for the above to work.
Also, this is one of those changes where it bears repeating that online RPM package updates are fragile. They are officially unsupported on Fedora Workstation, and variants like CoreOS and Silverblue make it even harder. A cgroups v2 migration is only expected to work on a freshly booted system.
Improvements
The last six months have seen a whole boatload of new features and improvements. Here are some highlights.
On Fedora Silverblue and Workstation, GNOME Terminal keeps track of the current Toolbox container, and just like it preserves the current working directory when opening a new terminal, it’s also able to preserve the Toolbox environment. This is quite convenient when hacking on a Silverblue system, because it removes the extra step of entering a toolbox after opening a new tab or window.
The integration with the host operating system has been deepened. Toolbox containers can now access virtual machines managed by the host’s system libvirt instance, and the host’s ulimits are preserved. The entirety of /dev is made available inside the toolbox as a step towards supporting the proprietary Nvidia driver to enable CUDA for AI/ML frameworks like TensorFlow.
The container’s /run/host now has big chunks of the host’s file hierarchy. This is handy for one-off use-cases which require access to parts of the host that aren’t covered by Toolbox by default.
Last but not least, Kerberos now works inside Toolbox containers. This will make it easier to contribute to Fedora itself from inside a toolbox.
Fedora Toolbox is now just Toolbox
Fedora Toolbox has been renamed to just Toolbox. Even though the project is obviously driven by the needs of Fedora Silverblue and uses technologies like Buildah and Podman that are driven by members of the wider Fedora project, it was felt that a toolbox container is a generic concept that appeals to many more communities than just Fedora. You can also think of it as a nod to coreos/toolbox which served as the original inspiration for the project, and there are plans to use it in Fedora CoreOS too.
If you’re curious, here’s a subset of the discussion that drove the renaming.
There have already been two releases with the new name, so I assume that almost all users have been migrated.
Note that the name of the base OCI image for creating Fedora toolbox containers is still fedora-toolbox for obvious namespacing reasons, but the names of the client-side command line tool, and the overall project itself have changed. That way you could have a debian-toolbox, a centos-toolbox and so on.
It should be obvious, but the Toolbox logo was designed and created by Jakub Steiner.
Fedora Toolbox — Under the hood
A few months ago, we had a glimpse at Fedora Toolbox setting up a seamlessly integrated RPM based environment, complete with dnf, on Fedora Silverblue. But isn’t dnf considered a persona non grata on Silverblue? How is this any different from using the existing Fedora Workstation then? What’s going on here?
Today we shall look under the covers to answer some of these questions.
The problem
The immutable nature of Silverblue makes it difficult to install arbitrary RPMs on the operating system. It’s designed to install graphical applications as Flatpaks, and that’s it. This has many advantages. For example, robust upgrades.
However, there are legitimate cases when one does want to install some random RPMs. For example, when you need things like *-devel packages, documentation, GCC, gofmt, strace, valgrind or whatever else is necessary for your development workflow. While rpm-ostree does offer a way around this, it’s painful to have to reboot every time you change the set of packages on the system, and it negates the advantages of immutability in the first place.
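To be concrete, layering a package with rpm-ostree looks like this, and the new deployment only takes effect after a reboot:
$ rpm-ostree install strace
$ systemctl reboot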
Containers
By this time some of you are surely thinking that containers ought to solve this somehow; and you’d be right. Fedora Toolbox uses containers to set up an elaborate chroot-like environment that’s separate from the immutable OSTree based operating system.
And once you are down to containers, Docker isn’t far away — surely this can be hacked together with Docker; and you’d be right again. Almost. You can hack it up with Docker but it wouldn’t be ideal.
The problem with Docker is that it requires root privileges. Every time you invoke the docker command, it has to be prefixed with sudo or be run as root. That’s fine if all you want is a place to install some RPMs. It would’ve required root anyway. However, it’s annoying if you want GNOME Terminal to default to running a shell inside your RPM based development environment. You’d have to enter the root password to even get to an unprivileged shell prompt.
So, instead of using Docker, Fedora Toolbox uses something called Podman. Podman is a fully-featured container engine that aims to be a drop-in replacement for Docker. Thanks to the Open Container Initiative (or OCI) standardizing the interfaces to Docker images and runtimes, every OCI container and image can be used with either Docker or Podman.
The good thing about Podman is that it can be used rootless — that is, without root privileges. So, that’s great.
Containers are weird, though
Containers are pretty widely popular these days, but not everybody who is transitioning from the current RPM based Fedora Workstation to Silverblue can be expected to set things up from first principles using nothing but the podman command line. It will surely increase the cognitive load of undergoing the transition, hindering Silverblue adoption.
Even if someone familiar with the technology is able to set things up, pitfalls abound. For example, why is the display server not working, why is the SSH agent not working, why are OpenGL and Vulkan programs not working, why is sshfs not working, why are LLVM and LibreOffice failing to build, and so on.
Let’s be honest. The number of people who understand both container technology and the workings of a modern graphical operating system well enough to sort these problems in a jiffy is vanishingly small. I know that at least I don’t belong to that group.
Container images are optimized for non-interactive use and size, whereas we are talking about the interactive shell running in your virtual terminal emulator. For example, the fedora OCI image comes with the coreutils-single RPM, which doesn’t have the same user experience as the coreutils package that we are all familiar with.
So, it’s clear that we need a pre-configured, and at times, opinionated, solution on top of Podman.
The solution
Fedora Toolbox starts with the similarly named fedora-toolbox OCI image hosted on the Fedora Container Registry. There’s one for every Fedora branch. Currently those are Fedoras 28, 29 and 30. These images are based on the fedora image, with an altered package set to offer an interactive user experience that’s similar to the one on Silverblue.
When you invoke the fedora-toolbox create command, it pulls the image from the registry, and then tailors it to the local user. It creates a user with a UID matching $UID, a home directory matching $HOME and the right group memberships; and it ensures that various bits and pieces from the host, such as the home directory, the display server, the D-Bus instances, various pieces of hardware, etc. are available inside the container. These customizations are saved as another image named fedora-toolbox-user. Finally, an OCI container, also named fedora-toolbox-user, is created out of this image.
If you are curious, run podman images and podman ps --all to verify the above.
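The output will look roughly like this, with most columns elided:
$ podman images
REPOSITORY                                      TAG
localhost/fedora-toolbox-user                   30
registry.fedoraproject.org/f30/fedora-toolbox   30
$ podman ps --all
CONTAINER ID  IMAGE                            NAMES
...           fedora-toolbox-user:30           fedora-toolbox-user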
Once the toolbox container has been created, subsequent fedora-toolbox enter commands execute the user’s preferred shell inside it, giving the impression of being in an alternate RPM flavoured reality on a Silverblue system.
If you are still curious, then open /usr/bin/fedora-toolbox and have a peek. It’s just a shell script, after all.
Fedora Toolbox — Hacking on Fedora Silverblue
Fedora Silverblue is a modern and graphical operating system targeted at laptops, tablets and desktop computers. It is the next-generation Fedora Workstation that promises painless upgrades, clear separation between the OS and applications, and secure and cross-platform applications. The basic operating system is an immutable OSTree image, and all the applications are Flatpaks.
It’s great!
However, if you are a hacker and decide to set up a development environment, you immediately run into the immutable OS image and the absence of dnf. You can’t install your favourite tools, editors and SDKs the way you’d normally do on Fedora Workstation. You can either unlock your immutable OS image to install RPMs through rpm-ostree and give up the benefit of painless upgrades; or create a Docker container to get an RPM-based toolbox, but be prepared to mess around with root permissions and to figure out why your SSH agent or display server isn’t working.
Enter Fedora Toolbox.
It makes it trivial to get a mutable development environment on Silverblue:
[rishi@bollard ~]$ fedora-toolbox create
[rishi@bollard ~]$ fedora-toolbox enter
🔹[rishi@toolbox ~]$
It uses OCI containers underneath, but takes away the cognitive overhead of thinking about containers by providing a seamless integration with the host environment. It uses rootless podman and buildah, so there’s no root in the picture either.
If you are going to try it out, make sure that you have the runc-1.0.0-56.dev.git78ef28e package in your Silverblue image. There’s also an ongoing review to get fedora-toolbox added to Fedora. If you don’t feel comfortable mucking around with rpm-ostree on the command-line, then fear not. Very soon all the necessary pieces will be part of the OS image, making it that much easier to start hacking on your Silverblue.