Sunday, August 12, 2018

Implementing a distributed compilation cluster

Slow compilation times are a perennial problem. There have been many attempts at solving it by caching and distributing the work, such as distcc and Icecream. The main bottleneck with both of these is that some work must be done on the "user's desktop" machine and its output then transferred over the network. Depending on the implementation this may include things such as fully preprocessing the source file and then sending the result over the net (so it can be compiled on the worker machine without needing any system headers).

This means that the user machine can easily become the bottleneck. In order to remove this slowdown all the work would need to be done on worker machines. Thus the architecture we need would be something like this:


In this configuration the entire source tree is on a shared network drive (such as NFS). It is mounted at the same path on all build workers as well as on the user's desktop machine. All workers and the desktop machine must also have an identical setup, that is, the same compilers and installed dependencies. This is fairly easy to achieve with Docker or any similar container technology.

The main change needed to distribute the work is to create a compiler wrapper script, much like distcc or icecc, that sends the compilation request to the work distributor. The request consists only of the command line to execute and the path to run it in. The distributor looks up the machine with the smallest load, sends the command to it, waits for it to finish and then returns the result to the developer machine.

Note that input and output files do not need to be transferred between the developer machine and the workers; NFS takes care of that automatically. This includes any changes made by the user in their local checkout which are not in revision control. The code that implements all of this (in an extremely simple, quick, dirty and unreliable way) can be found in this Github repo. The implementation is under 300 lines of Python.
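
To give an idea of the scale, the wrapper can be little more than the following. This is a minimal sketch, not the code in the repo: the JSON-over-TCP wire format, the port number and the reply fields are invented for illustration. The wrapper takes the distributor's address as its first argument and the compiler command line as the rest.

#!/usr/bin/env python3
# Minimal sketch of a compiler wrapper: send the command line and the
# current working directory to the work distributor and relay its
# result back to the caller. The JSON-over-TCP protocol and port used
# here are made up for illustration.
import json
import os
import socket
import sys

def main():
    server = sys.argv[1]    # address of the work distributor
    command = sys.argv[2:]  # e.g. ['g++', '-c', 'foo.cpp', '-o', 'foo.o']
    request = {'cwd': os.getcwd(), 'cmd': command}
    with socket.create_connection((server, 7777)) as conn:
        conn.sendall(json.dumps(request).encode('utf-8'))
        conn.shutdown(socket.SHUT_WR)
        reply = json.loads(conn.makefile().read())
    sys.stdout.write(reply['stdout'])
    sys.stderr.write(reply['stderr'])
    sys.exit(reply['returncode'])

if __name__ == '__main__':
    main()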

Experimental results

Since I don't have a data center to spare, I tested this on a single 8 core i7 computer. The "native OS" ran the NFS server and the work distributor. The workers were two cloned VirtualBox images, each with 2 cores. For testing I compiled LLVM, which is a fairly big C++ code base.

Using the wrapper is straightforward and consists of setting up the original build directory with this:

FORCE_INLINE=1 CXX='/path/to/wrapper workserver_address g++' cmake <options>

FORCE_INLINE is needed so that configuration tests are run on the local machine. They write to /tmp, which is not shared, so the test executables might otherwise be run on a different machine than the one they were compiled on, leading to failures. This could also be solved by having a shared temporary folder, but that would increase the complexity of this simple experiment.

Compiling the source just over NFS on a single machine using 2 cores took about an hour. Compiling it with two workers took about 47 minutes. This is not particularly close to the optimal time of 30 minutes, so there is a fair bit of overhead in the implementation. Most of this is probably due to NFS and the fact that absolutely everything ran on the same physical machine. NFS also had coherency problems: sometimes process invocations could not see files created by their dependency tasks. The most common case was linker invocations that were missing one or more object files. Restarting the build always made it pass. I tried to add sync commands as necessary but could not make it 100% reliable.

Miscellaneous things of note

In this test only compilation was parallelised. However, this same approach works with any standalone executable, that is, one that does not need to talk to any other ongoing process via IPC. Every build system that supports setting the compiler manually can be used with this scheme. It also works for parallelising tests with build systems that support invoking tests through an arbitrary runner. For example, in Meson you could do this:

meson test --wrapper='/path/to/wrapper workserver_address'

The system also works (in theory) identically on other operating systems such as macOS and Windows. Setting up the environment is even easier because most projects do not use "system dependencies" on those platforms, only the compiler. Thus on Windows you could mount an SMB share with the code at, say, D:\code on all machines and, assuming they all have the same version of Visual Studio, it should just work (not actually tested).

Adding caching support is fairly easy. All machines need to have a common directory mounted; point CCACHE_DIR to it and set the wrapper command on the desktop machine to:

CXX='/path/to/wrapper workserver_address ccache g++'

Thursday, July 26, 2018

Building native multiplatform GUI apps with Meson

A recent trend in multiplatform GUI applications is to write the core business logic of the application in something like C++, have it (optionally) expose a plain C interface and then create a GUI on top of that using the native widget set of each supported platform. This means that the application uses GTK on Linux and other Unixes, Cocoa on macOS, the win32 API on Windows, Java widgets on Android and so on. This makes the application fully native on all platforms. The tradeoff is having to write the GUI multiple times, versus not having to wrangle a multiplatform widget toolkit as a dependency.

Regardless of how you build your guis you need to have a build system that can build the application under all these different environments from a single code base. To this end I created a sample application called Platypus, which can be downloaded from this Github repo.

The code and compilation

The application itself is extremely simple. It consists of one shared library that returns a random number between 0 and 100 when called. It is implemented using C++11's random number generator functionality to ensure each platform has a toolchain new enough to handle it. The GUI applications built on top of it have a text label and a button. Pressing the button updates the text label with a new random number. There is also a test program that verifies that the library is working.

The GTK version is a plain C application. The gui is defined using a Glade interface definition file rather than building it by hand.

The macOS version has a GUI written in Objective-C. The gui is defined as a XIB file created with Xcode. It is built into a standard app bundle.

The Windows application is written in C++ (though it does not really use any C++ features) and has a gui laid out by hand.

All these guis have full platform integration with icons, an Info.plist, .desktop files and so on.

The installers

The GTK version can be built as a Flatpak in the usual way. The build manifest can be found in the repository's root.

The macOS version builds a standard .dmg installer that can be directly shipped to end users.

The Windows version builds an .MSI installer providing complete install/uninstall integration.

How complicated is it?

The entire build definition consists of 107 lines of Meson.

Screenshots

Here is the plain GTK version running as a Flatpak application on Kubuntu. Window icons and desktop integration work as you would expect.


Here is the macOS version showing the drive image, the installer window and the application running with proper platform integration.


Finally here is the Windows application showing the installed path location under Program Files, the application itself and the automatic integration to Windows' application uninstaller system.


Future plans

It would be cool to add an Android application, as well as an iOS application written in Swift, to the code base. Patches are welcome as always.

Wednesday, July 25, 2018

Why Git is terrible in four pictures

I was asked to write a blog post on why I dislike Git in general and its UI in particular. Here is a representative sample in four images.

Recently a pull request was filed that looked like this:


As you can see, there is an extra merge commit. As is customary, we wanted to get rid of it to get a clean rebase-based merge history. To do that you'd first get a checkout of the code and look at the log, which looks like the following.


So far, so good. Now let's do a rebase --interactive. It looks like this:


Suddenly Git has chosen to silently remove the merge commit from this list. Why? I have no idea. The commit had changes in it, so it was not pruned for being empty. If you then exit the editor without making any changes (which usually means "do not change anything"), the commit is deleted and any changes that were in it are gone:


If your latter commits built on those changes, you get yummy merge conflicts for something that is conceptually a no-op.

This is the essence of working with Git. Most of the time it works sort of ok, but every now and then it will, without any warning or reason, completely screw you over, destroy your data and leave you stranded, forced to debug your way out of the resulting mess without any help.

"Of course it breaks, you should have used --do-not-do-the-idiotic-wrong-thing-which-for-some-reason-is-the-default command line option, everyone knows that, duh!"

A common kneejerk response to these kinds of problems is that it is somehow the user's own fault and that they should have memorized every quirk in the software in order to use it correctly (or at all). I'm certain some of you out there on the Internet had already started writing a strongly worded message to let me know that. Don't bother.

Whenever you have a piece of software that silently destroys user data, the fault always, always, ALWAYS lies with the program. Even if "it only happens rarely". Even if you think "it's the user's fault". Even if you personally know how the problem could have been avoided. The flaw is ABSOLUTELY ALWAYS in the software. Never in users. Ever.

Any attempt at shifting the cause to the user, for whatever reason, is victim blaming. Don't do it.

Monday, July 16, 2018

How expensive is globbing for sources in large projects

A common holy war in build systems is whether you should explicitly list all sources that make up a target or use a globbing pattern. There are both technical and non-technical arguments on both sides. The latter mostly deal with reliability and flexibility vs convenience. In this post we are going to ignore them completely and instead focus on the technical parts, specifically the overhead of globbing. The measurement script used can be downloaded from this repo.

In this test we used a full checkout of the Chromium source code. The tests were run under Windows, since it is noticeably slower than Linux at both file operations and process invocations. The task simulation consists of roughly three parts:

  1. Scan the source tree for all directories that contain sources
  2. Generate glob patterns for detected directories (corresponding roughly to "one target for all sources in one directory")
  3. Run the globs

This ignores a bunch of steps, such as serialising the glob results to files and calculating the delta between two glob sets. These are probably fairly fast compared to file access operations, though.
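
The measurement script in the repo is the authoritative version, but the three steps boil down to roughly the following. This is a simplified sketch; the set of source suffixes is an assumption.

#!/usr/bin/env python3
# Rough sketch of the three steps: find all directories that contain
# sources, build one glob pattern per suffix per directory, then
# evaluate every glob and time it.
import glob
import os
import time

SOURCE_SUFFIXES = ('.c', '.cc', '.cpp', '.h', '.hpp')  # assumed suffix set

def find_source_dirs(root):
    for dirpath, dirnames, filenames in os.walk(root):
        if any(f.endswith(SOURCE_SUFFIXES) for f in filenames):
            yield dirpath

def main(root):
    # Steps 1 and 2: scan the tree and generate the glob patterns.
    patterns = []
    for d in find_source_dirs(root):
        patterns += [os.path.join(d, '*' + suffix) for suffix in SOURCE_SUFFIXES]
    # Step 3: run all the globs.
    start = time.time()
    results = [glob.glob(p) for p in patterns]
    print('Ran', len(patterns), 'globs in', time.time() - start,
          'seconds, matching', sum(len(r) for r in results), 'files.')

if __name__ == '__main__':
    main('.')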

Scanning the source tree and generating the globs

There is no direct correlation between this step and a regular build system. It is mostly interesting as a comparison of file operations between a cold and a hot cache. Running the scan with a cold cache takes about 2 minutes, but with a warm cache only about 6 seconds.

Since this step is always run first, the following tests are all operating with a hot cache.

The actual globbing

Running all globs on the Chromium source tree takes between 2 and 6 seconds. This is the absolute lowest time that can be obtained for a no-op build without daemons because all globs must be re-evaluated every time.

The rule of thumb for UI design is that everything under one second is perceived as instantaneous. This means that for these sizes globbing causes a noticeable delay. Whether this is seen as insignificant or aggravating depends on each user.

Extra bonus: C++ modules

Since we have the measurement script, let's use it for something more interesting. Modules are an upcoming C++ feature meant to speed up builds and provide a ton of other coolness, depending on who you ask. The current specification works by having a kind of "module export declaration" at the beginning of source files. The idea is that you first compile those to generate a sort of module declaration file and then you can start the actual compilation that uses said files.

If you thought "waitaminute, that sounds exactly like how FORTRAN is compiled", you are correct. Because of this it has the same problem that you can't compile source files in an arbitrary order, but instead you must first somehow scan them to find out the interdependencies between source (not header) files. In practice what this means is that instead of single-phase compilation all files must be processed twice. All scan operations must be done before any compilation jobs can start because otherwise you might start to compile a file before its dependencies are fully processed.

The scanning can be done in one of two ways: either the build system scans the sources itself, meaning it needs to understand the syntax of the source files, or the compiler is invoked in a special preprocessing mode. Note that build systems such as Ninja do not do any such operations by themselves but instead always invoke external processes to do their work.

Testing the performance impact of these two is straightforward. The first one can be done by reading the first ten lines of each source file and then throwing them away. Measuring this time gives a fairly good estimate of the file processing overhead. The second way can be measured by doing the exact same thing but also invoking the compiler with no-op command line arguments to get the process invocation overhead.
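
A simplified version of the two measurements could look roughly like this. This is a sketch only; the compiler name and the use of --version as a stand-in no-op invocation are assumptions, not what the actual script does.

#!/usr/bin/env python3
# Sketch of the two scanning approaches: read the first ten lines of
# each source file in-process, or additionally spawn one compiler
# process per file to measure the process invocation overhead.
import itertools
import subprocess
import time

def scan_in_process(sources):
    # Approximates the build system parsing module declarations itself.
    for src in sources:
        with open(src, errors='ignore') as f:
            list(itertools.islice(f, 10))  # read and discard ten lines

def scan_with_compiler(sources):
    # Same as above, but also pay for one compiler invocation per file.
    # '--version' stands in here for a hypothetical no-op scanning mode.
    for src in sources:
        with open(src, errors='ignore') as f:
            list(itertools.islice(f, 10))
        subprocess.run(['g++', '--version'],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

def timed(func, sources):
    start = time.time()
    func(sources)
    return time.time() - start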

Scanning the files directly takes roughly 120 seconds. For an 8 core machine this means a 15 second delay (at minimum) before any compilation tasks can begin. This is not great but for a full build it should be tolerable.

When spawning a compiler process the same operation takes 69 minutes. This is intolerably slow and would require an order of magnitude speedup in compilation times to be worthwhile. Unlike regular compilations, dependency scanning can not be sped up with unity builds because the specification requires that the module declaration must be at the very beginning of source files (and presumably there can not be more than one in a single TU).

Wednesday, June 13, 2018

Easy MSI installer creator

Shipping programs on Windows platforms becomes a lot simpler (especially in corporate environments) if you can create an MSI installer. The only Free software solution for that is the WiX installer toolkit. The fairly big downside to this is that it is very much tied to how Visual Studio does things, with GUIDs and all that. The installer's contents and behavior are defined in an XML file whose format is both verbose and confusing.

Most Unix developers, once faced with this, will almost immediately blurt out something like "Why can't I just do DESTDIR=c:\some\path ninja install and have it make an installer out of the result?" So I created a script that does exactly that.

The basic usage is simple. First you do a staged install into some directory and create a JSON file describing the installation, which would look something like this:

{
    "update_guid": "YOUR-GUID-HERE",
    "version": "1.0.0",
    "product_name": "Product name here",
    "manufacturer": "Your organization's name here",
    "name": "Name of product here",
    "name_base": "myprog",
    "comments": "A comment describing the program",
    "installdir": "MyProg",
    "license_file": "License.rtf",
    "parts": [
        {"id": "MainProgram",
         "title": "Program name",
         "description": "The MyProg program",
         "absent": "disallow",
         "staged_dir": "staging"
        }
    ]
}

Running the script would then create a standalone MSI installer with the contents of the staging directory.
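
With the staging directory populated and the JSON file written, generating the installer is a single command along these lines (the script and file names here are assumed for illustration; check the repo for the actual invocation):

python createmsi.py myprogram.json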

Multiple components in one installer

Some programs ship in multiple parts, and the user can choose which parts to install. This is supported by the script. First you must split the files into multiple staging directories, one per component, and then add corresponding entries to the parts array. See the repository for an example.
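
For illustration, the parts array from the earlier JSON file could grow a second, optional component roughly like this (the component names, directories and the "absent": "allow" value are illustrative assumptions):

    "parts": [
        {"id": "MainProgram",
         "title": "Program name",
         "description": "The MyProg program",
         "absent": "disallow",
         "staged_dir": "staging"
        },
        {"id": "DevFiles",
         "title": "Development files",
         "description": "Headers and import libraries",
         "absent": "allow",
         "staged_dir": "staging_devel"
        }
    ]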

Monday, April 23, 2018

Dependencies with code generators got a lot smoother with Meson 0.46.0

Most dependencies are libraries. Almost all build systems can find dependency libraries from the system using e.g. pkg-config. Some can build dependencies from source. Some, like Meson, can do both and toggle between them transparently. Library dependencies might not be a fully solved problem but we as a community have a fairly good grasp on how to make them work.

However there are some dependencies where this is not enough. A fairly common case is to have a dependency that has some sort of a source code generator. Examples of this include Protocol Buffers, Qt's moc and glib-mkenums and other tools that come with Glib. The common solution is to look up these binaries from PATH. This works for dependencies that are already installed on the system but fails quite badly when the dependencies are built as subprojects. Bootstrapping is also a bit trickier because you may need to write custom code in the project that provides the executables.

Version 0.46.0, which shipped yesterday, has new functionality that makes this use case noticeably simpler. In Meson you find the code generator scripts to run with the find_program command like this:

mkenums_exe = find_program('glib-mkenums')

This will find the executable from the system. However, if you have built Glib as a subproject, then the subproject can issue the following statements (this is not in Glib master yet AFAIK, so it does not actually work; this is more of an illustrative example):

internal_mkenums_exe = <command to generate the mkenum script>
meson.override_find_program('glib-mkenums', internal_mkenums_exe)

After this, issuing find_program('glib-mkenums') no longer goes to the system but instead returns the internal program. Meson's internal helper modules have also been updated to always find the programs they use with find_program. This means that all projects using Glib functionality can be built without needing a system-wide install of Glib. Even more importantly, this requires zero changes in existing projects. It will just work out of the box. You can even use Glib helper code when building Glib itself.

This is especially convenient when you need a newer version of any dependency than your distro provides and especially on platforms such as Windows where "distro dependencies" do not exist.

As an example of what is possible, Nirbheek has managed to bootstrap GStreamer on Windows using nothing but Visual Studio, Python 3, Ninja and Meson. The main limitation currently is that the overriding executable may not be a build target (i.e. something you build from source with a compiler) because the result of find_program may be used during the configuration phase, before any source code compilation has taken place. We hope to remove this limitation in a future release.

Sunday, April 8, 2018

Cookie purging the simple way

Getting rid of cookies (especially tracking and ad cookies) consistently is a good thing. However it turns out to be a bit tricky because you don't want to get rid of session cookies for sites you care about. Basically what you want to achieve is this:

  1. Store all cookies as normal
  2. Maintain a whitelist of servers that are allowed to store persistent cookies (usually for sites such as Github, Reddit, Twitter and the like)
  3. At regular intervals (preferably every time the browser is closed), delete all cookies not whitelisted.

There are browser extensions to do this, but they are often bizarrely complex, and even those that aren't are inconvenient to use, as they require installing plugins, clicking through menus and so on. Firefox is supposed to have built-in functionality for this as well, but I read through the online instructions and could not figure out how to set it up so that it actually works.

Thus, as an experiment, I wrote a Python script to do this; it is available in this Github repo. Using it is simple:

  1. Write a whitelist file consisting of one hostname per line (all subdomains of the specified host are also permitted).
  2. Shut down Firefox.
  3. Run the script.
  4. Start Firefox.
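
For the curious, the core of the cleanup logic can be surprisingly small. The following is a minimal sketch, not the actual repo code; it assumes Firefox's cookies.sqlite database with a moz_cookies table (id and host columns) and that Firefox is not running while the script operates on the database.

#!/usr/bin/env python3
# Sketch: delete every cookie whose host is not covered by the whitelist.
import sqlite3

def is_whitelisted(host, whitelist):
    host = host.lstrip('.')
    return any(host == entry or host.endswith('.' + entry)
               for entry in whitelist)

def purge(cookie_db, whitelist_file):
    with open(whitelist_file) as f:
        whitelist = [line.strip() for line in f if line.strip()]
    conn = sqlite3.connect(cookie_db)
    # moz_cookies is the cookie table in Firefox's cookies.sqlite;
    # this assumes the schema has at least 'id' and 'host' columns.
    rows = conn.execute('SELECT id, host FROM moz_cookies').fetchall()
    doomed = [(row_id,) for row_id, host in rows
              if not is_whitelisted(host, whitelist)]
    conn.executemany('DELETE FROM moz_cookies WHERE id = ?', doomed)
    conn.commit()
    conn.close()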