I develop CI software. I use the CI software I develop. This means I run several CI servers ("CI nodes" in Radicle parlance), currently four. When I make a release, I deploy the new version to all and trigger CI to run on all repositories on each.
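Roughly, a release round looks like the sketch below. The node names are mine, but `deploy-ci` and `trigger-ci` are hypothetical stand-ins for whatever actually does the work on each node, not real Radicle CI commands.

```python
#!/usr/bin/env python3
# Sketch of a release round: deploy the new version to every CI node,
# then trigger CI for every repository on each. "deploy-ci" and
# "trigger-ci" are made-up placeholder commands, not a real interface.

import subprocess
import sys

NODES = ["ci0", "ci1", "ci-private", "callisto"]


def ssh(node, *cmd):
    # Run a command on a node over SSH; stop the whole round if it fails.
    subprocess.run(["ssh", node, *cmd], check=True)


def release_round(version):
    for node in NODES:
        ssh(node, "deploy-ci", version)          # install the new release
        ssh(node, "trigger-ci", "--all-repos")   # queue runs for every repository


if __name__ == "__main__":
    release_round(sys.argv[1])
```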
This means I wait for CI to finish a lot. But unlike waiting for Godot, this wait is not a metaphor, a philosophical puzzle, or an artistic goal in and of itself. It's just a part of the development process.
Because I wait for CI to finish so much, two of the servers are a lot more powerful than would strictly be needed. A third has only three repositories. The fourth is slower and has the most repositories, but it's also the least important one. So I wait for the first three before concluding that things work, and let the last one finish in its own time.
My CI servers are called ci0, ci1, ci-private (all non-public), and callisto (https://callisto.liw.fi/). I have no imagination.
Running this much CI also makes me more sensitive to flaky tests than most developers. CI failed? Is that because there's a bug in the software under test, a bug in my CI system, or because the test is just flaky? A flaky test can cause me to waste hours debugging something that isn't actually a flaw in my software. I do not like that. I do not like that at all.
Four CI servers, 153 CI runs to make sure a new release works. The first three servers finish in less than 15 minutes; the slow one takes about an hour.
This little mini-PC runs two of the servers (the slow one, plus the one with just three repositories). They're virtual machines on this box.
(Excuse the dust. I've cleaned since taking the photo. The machine has also moved to a shelf and the cables on my desk are now somewhat managed. I have excuses.)