Please note that this is all historical context, as I have not been running the Gentoo Tinderbox for over ten years now.

I have blogged extensively about my tinderbox efforts. If you still have doubts about what this is, I'll explain it here.

The name tinderbox is derived from one of the first public-result continuous integration systems, developed by Mozilla for their suite and then extended to cover Firefox, Thunderbird and the rest of their products. For some reason, the name tinderbox has been used for most of the continuous integration and automated testing attempts in the Gentoo Linux project, by a wide range of developers.

In this case, I'm referring to my personal attempt to improve Gentoo Linux's quality by building and running the tests of all the packages in the Gentoo Portage Tree, over and over. The tinderbox runs on a system I own, but it was paid for mostly through donations from the community, was for a while hosted by an employer of mine, and is now hosted at my own expense.

Setup and Source Code

The tinderbox is implemented on top of Linux Containers (LXC), which are little more than chroots on steroids. It consists of a filesystem with a basic Gentoo install, into which all the packages are installed, one by one. The scripts it executes are published in Gentoo's Git repository; they were mostly written by me and Zac Medico, and are all released under an ISC-style license (a very permissive license). The logs produced by the multiple tinderbox instances are then analysed by a different set of scripts, also public, and stored in Amazon's S3 service, to be accessed when filing bugs.

All tinderbox instances share a common base setup, but most of the details can be changed from instance to instance: architecture, unmasked (and masked) packages, and USE flags are all customisable.

To catch the widest range of troublesome ebuilds, the tinderbox is an isolated instance. There is no outward IPv4 connectivity, and even IPv6 is limited to its own host system. The host system provides the storage space (via bind mounts, with no network involved), an rsync instance, and a non-caching Squid proxy, both reachable over IPv6 on a public, static address (the address is filtered through iptables so that only SSH is accessible on the instances themselves).

The Squid proxy is necessary to allow access to source tarballs on the distfiles mirrors. HTTP proxy settings are not exported in the environment, but are instead set individually in the calls to curl made by Portage to fetch the sources; this way, packages that try to fetch data from the network at build time will fail.
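
Portage lets the fetch command be overridden per instance; a hypothetical make.conf sketch of this arrangement (the proxy address and port are placeholders, not the real ones):

```shell
# Hedged sketch: pass the Squid proxy only to curl itself, instead of
# exporting http_proxy into the build environment, so that packages
# trying to fetch from the network during the build still fail.
FETCHCOMMAND="curl --fail --location --proxy http://[2001:db8::1]:3128 --output \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
RESUMECOMMAND="${FETCHCOMMAND} --continue-at -"
```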

A positive side of forgoing all direct outside connectivity is that no software is needed to manage the network inside the container: LXC can easily set up the IPv6 address and routes from the outside, and then drop all networking capabilities for the container itself. Using public IPv6 addresses and public hostnames also gives the analysis script a quick way to identify hosts, as well as making it very easy to set up SSH proxy hosts.
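
With the legacy LXC configuration format, this can be sketched roughly as follows (the addresses, bridge name, and the exact set of dropped capabilities are assumptions for illustration):

```conf
# Host-side container configuration: the host assigns the IPv6
# address and default route, then the container drops the
# capabilities it would need to reconfigure its own network.
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv6 = 2001:db8::2/64
lxc.network.ipv6.gateway = 2001:db8::1
lxc.cap.drop = net_admin net_raw
```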

Results and Statistics

The original intent was for the tinderbox to identify packages failing to build with the --as-needed link-editor option, so that it could be added to Gentoo's default flags. Since then it has been used for a number of other information-gathering operations:

  • It keeps testing the ~amd64 version of the tree, making sure that newly-introduced software builds as intended with --as-needed. A separate instance is running stable, hardened amd64.
  • It is used to test important system packages (GCC, Binutils, Autoconf, Automake, Make, Libtool, kernel headers, GLIBC, …) before they reach ~arch users.
  • Reports packages installing pre-stripped ELF files, files built without respecting the LDFLAGS variable, .la files that are not used, or files simply installed in the wrong paths.
  • Reports packages failing to run self-tests with src_test().
  • Identifies packages failing to build with parallel make, since it is set to use 24 jobs per build.
  • Can identify automagic dependencies, which are not listed in ebuilds but still picked up, as well as unresolved file collisions between packages, since each package is merged into a single shared system.
  • A specially-targeted list can be generated to check the reverse dependencies of new API-breaking libraries and tools. This has been used to prevent introducing huge regressions when moving to new major versions of libraries such as libpng and Boost.
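
Several of the behaviours in the list above map onto a handful of Portage settings; a hypothetical make.conf sketch for one instance (the values are assumptions, not the tinderbox's actual configuration):

```shell
# Run each package's test suite, detect file collisions between
# packages, and stress parallel make with a high job count.
FEATURES="test collision-protect"
MAKEOPTS="-j24"
ACCEPT_KEYWORDS="~amd64"   # the unstable branch tested by the main instance
```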

At the time of writing, the tinderbox has been used to generate most of the bugs in blockers such as:

The tinderbox, running on the current system as described in the next section, is able to build an average of about 450 packages a day; with the current count of over 14000 packages in the main tree, it takes over a month to properly rebuild all the packages. Some packages might take just a minute or two to merge, while others, such as dev-libs/boost, take over eight hours to run their full test suite.
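
The "over a month" figure follows from a quick back-of-the-envelope division of the two numbers quoted above:

```python
# Rough throughput estimate using the figures quoted above:
# over 14000 packages in the tree, ~450 builds per day.
packages_in_tree = 14000
builds_per_day = 450

days_per_full_pass = packages_in_tree / builds_per_day
print(f"A full pass over the tree takes about {days_per_full_pass:.0f} days")
# → A full pass over the tree takes about 31 days
```

That is a little over a month, and any package that monopolises a builder for hours (like the Boost test suite) pushes the average further out.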

Hardware Specs

As of April 2012, a new server is taking the place of my previous system, Yamato. The new host, called Excelsior, has been paid for by the community through Pledgie. Thanks to all the contributors.

Excelsior is a dual-CPU system with sixteen cores per socket, for a total of 32 AMD Opteron 6272 cores running at 2.1GHz (on average), 64GiB of RAM, and a number of SATA hard drives. The tinderbox itself takes about 100GB of disk space for the install, plus another 200GB for the local copies of the distfiles (the source and patch archives used by the ebuilds). Most of the builds are done in RAM, when possible.
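
Building in RAM is typically achieved by mounting Portage's build directory on tmpfs; a hypothetical /etc/fstab line for such a setup (the mount point matches Portage's default build location, but the size and options are assumptions):

```conf
# Keep package build trees in RAM; packages too large to fit can be
# redirected back to disk on a per-package basis.
tmpfs  /var/tmp/portage  tmpfs  size=32G,uid=portage,gid=portage,mode=0775,noatime  0 0
```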

With this configuration, a few packages still take a long time to build, simply because they do not build in parallel. But this is not a hardware issue as much as a software one.