Wednesday, 28 February 2018

On the unoptimalities of language-specific build systems

A fairly big recent trend has been the emergence of new programming languages that are meant to be compiled into machine code. The silent (and sometimes not so silent) goal of these languages has been to replace C and C++ as the dominant systems programming language.

All of these languages come with their own build system and dependency management optimised for that particular language. This makes sense: a good developer experience is important, and not having 20-30 years of legacy to carry with you means you can design and develop slick systems relatively easily. But, as always, there is a downside. Perhaps the main issue comes up pretty quickly when trying to combine said code with projects in other languages.

A common approach is for the programming language in question to bundle up all its dependencies as source in a big clump. Then the advocates will say that "it's simple, just call our build system from yours and it gets built". This seems simple, but it uses the weaseliest of all weasel words: just. Whenever someone tells you to "just" do something, they are almost always trying to trivialise away the hardest part of the entire operation. So it is here as well.

When could it work?

There is one case where this approach works without problems. That is when the dependency builds into a single library with a C interface and also ships the header and a pkg-config file for using it. This case is indistinguishable from a plain C library, so it will work exactly the same. The dependency can be provided as a system package, built as a dependency in a Flatpak manifest, or handled in any other similar way.
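To illustrate why this case is easy, everything a build system needs in order to consume such a dependency can be obtained with two pkg-config queries. Here is a minimal sketch in Python, assuming pkg-config is installed; zlib is used only as an example of a library that ships a .pc file.

import subprocess

def pkg_config(package):
    # Ask pkg-config for the compile and link flags of an installed library.
    # This is all a build system needs to use a dependency with a plain C
    # interface, no matter what language the library itself is written in.
    cflags = subprocess.check_output(
        ['pkg-config', '--cflags', package], text=True).split()
    libs = subprocess.check_output(
        ['pkg-config', '--libs', package], text=True).split()
    return {'compile_args': cflags, 'link_args': libs}

print(pkg_config('zlib'))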

Unfortunately this system breaks down the second you want to do anything else. The most common requirement is to build all dependencies from source in a single build step. This is necessary on any platform that does not have a concept of a system package manager. Many people also want to do this on Linux systems, for example to build their project's trunk against their dependencies' trunks. This is where things fall down.

The myth of the build dir

Most people probably haven't thought about the build directory of their builds. The most common conception is that the build system just (there's that word again) compiles source code into object files and then into final targets, and that the installation step merely copies the files out to the staging directory. This is not true in the slightest.

Build systems need to do a whole lot of stuff to make things workable directly from the build tree. Every build system does this slightly (and sometimes massively) differently. More importantly, the way each build system does it is not stable. They are allowed to, and will, change the way the build tree is laid out at any time. Nothing inside the build tree is stable: not file formats, not directory layouts, nothing.

The problem with building source code with two different build systems in a single build is that eventually they need to work together. Libraries need to be linked. Sources need to be generated. Executables need to be run. That means joining two completely unstable elements together. The simplest problem in this space is file layout. Every build system expects a certain layout for the files it manages, and this is usually very different from other build systems. Thus, in order for this to work, there would need to be a way to tell every build system to adapt to a different system's file layout when run as a subtask.

This is a challenging thing to request, because it requires a lot of menial work that build systems have traditionally (always, actually) been unwilling to do. Guessing the subtask's layout and hoping that it does not change might work for a while, and then it breaks for the slightest of reasons. The problems only get harder from there.

N^2 manual work algorithms are awesome!

Even if this worked (and it does not), the next problem comes from scaling up. You can only "just call" from one build system to another if someone has taken the time to make one understand the other. This is simple for two build systems: you need to write two integrations, one in each direction. But suppose we live in a world where many of the common C libraries in use today have been replaced by implementations in other languages. If you were doing cross-platform mobile development, you could have C, C++, Java, D, Rust, Go and Swift in the same project.

Seven languages means seven different build systems, and possibly more, since C and C++ commonly have more than one dominant build system. This means reading and understanding seven different build system syntaxes and mental models. If you want to combine those freely, it means writing 7 × 6 = 42 different build system integrations (one for each direction of each pair), which must, lest it be forgotten, combine the unstable innards of all of these. And then it gets worse.

Since every language has its own package manager and dependency downloader, you now have up to seven package managers in your project. Actually no, that is a lie.

The tangled web of lies and deceit, er, dependencies

When talking about dependencies between projects in different languages, most people usually mean a dependency graph like this.

That is, there is one project in a single language and a second project in a different language that uses it. For this simple case most things are feasible. But let's see what happens when we add just one more project.

Here we have project 1 using language 1. It has a dependency on project 2, written in language 2. However, project 2 has an internal dependency on project 3, which is also written in language 1. The question now becomes: how should this be built?

Since languages 1 and 2 each use their own build tool and package manager, the two projects at the ends of the chain don't know that they are being built as part of the same project. Language 2 completely hides its dependency, as it should. The two projects need to work independently. This means that each one of them must resolve its dependencies in isolation. If they download their dependencies at configuration time, then every build setup accesses the dependency provider twice. Doing more dependency resolutions than you have languages in your project seems suboptimal.

The other approach to this is usually called vendoring. In this model each project is consumed as a tarball that embeds all of its own dependencies as source code. This seems like a working solution, but it's not really. Many modern languages go the NPM route, where it is considered good practice to have many small dependencies. It is not uncommon for medium or even small projects to have 50+ dependencies. This leads to problems such as these:


Here project 1 depends on two different projects that are both implemented in language 1. Just like above, these two projects don't know of each other, because their dependency chain goes via language 2, which hides it. Both of these projects have their internal dependencies embedded, so they can be built from scratch without problems.

The problem here is that, by basic popularity and probability, the two projects embed many of the same dependencies. The embedded copies might have the same versions or they might differ. If they both end up in the toplevel executable you get, depending on your toolchain and the phase of the moon, either a working binary or the nastiest of linker bugs to fix.

Even if this yields a working program, there is a big downside: compilation takes up to twice as long, because you have to compile the same dependencies twice in different but isolated parts of the build tree. As a rough approximation, this means that adding a dependency to a graph like this goes from being O(1) more work to being O(N) more work, because dependency graphs cannot be deduplicated when a dependency of a different language sits between the two. It is left as an exercise to the reader to visualize what this would look like on a huge project such as Chromium.
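To put rough numbers on this, here is a toy Python model of the graph above. All project and dependency names are invented; the point is only that isolated vendoring compiles shared subtrees once per embedding project, while a hypothetical global resolver would compile each dependency once.

# Two language 1 projects that cannot see each other, each embedding
# its own copies of its transitive dependencies (all names invented).
vendored_deps = {
    'dep_project_a': {'fmt', 'json', 'zlib'},
    'dep_project_b': {'fmt', 'json', 'regex'},
}

# Every embedded copy gets compiled separately.
jobs_vendored = sum(len(deps) for deps in vendored_deps.values())

# A global resolver could build each unique dependency exactly once.
jobs_deduplicated = len(set.union(*vendored_deps.values()))

print(jobs_vendored, 'dependency builds with isolated vendoring')       # 6
print(jobs_deduplicated, 'dependency builds with global deduplication')  # 4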

The simple solution

There is a simple solution to this problem and it is very popular among language zealots: reducing the number of languages to one by claiming that in the future everything will be written in their own favourite language. It does not matter what the growth rate of complexity is if it is only ever evaluated at N = 1.

The reduction of programming languages to one is expected to happen any minute now, immediately after Mr. Godot brings us the news of Eastasia's surrender.

The real problem

All of this boils down to the fact that language-specific build systems are two opposing things at the same time. They are both a very comfortable gilded cage and an extremely isolating silo. They foster and promote cooperation within their own group but make cooperation between groups a lot harder.

One of the things we learn from history is that people who have opposed cooperation have, ultimately, lost to those who have promoted it. Maybe we should heed the teachings of history and start working towards better, more encompassing dependency management.

Sunday, 18 February 2018

Projects and features Meson could use help with

A question I was asked during my LCA2018 presentation was how people could help the Meson project. I could not come up with proper projects off the cuff, so here are a bunch of things that have come up since. Feel free to contact us via IRC, email or any other medium if you wish to contribute.

WrapDB wrangler

WrapDB provides a simple way to download source dependencies automatically. Basically it takes an upstream release tarball, adds Meson build files to it if needed and publishes the result on the web. The work consists mostly of reviewing and merging submissions from the community. Creating your own is also fine. This is a fairly lightweight task, only requiring actions every now and then (submissions come less than once a week, typically).

CI fixer upper

For CI we use the free tiers of Travis and AppVeyor. This works fairly well, but it is very slow because our testing matrix is huge. Running the full test suite through AppVeyor takes about an hour. This slows us down a fair bit, and in addition both CI providers have a nasty habit of breaking down fairly often. We also don't want to move to paid tiers, because they get ridiculously expensive for our usage pattern (as in, a few months of paid macOS would cost more than a brand new Mac Mini).

We don't have any good ideas on how to make this better. If you do, let us know.

Large scale regression tester

Meson is being used by a fairly large number of projects. This makes fixing bugs and refactoring code challenging, because there is the possibility of regressions. It would be nice if we could do something similar to what the Rust developers do and rebuild all, or a large fraction of, the projects using Meson with the trunk version every now and then.

Xcode backend improvements

The Xcode backend is currently a bit crappy. The main reason for this is that the Xcode project file format is awful in many ways, the two main ones being that it is completely undocumented and that it is not really a file format as such, but more of a memory dump of Xcode's internal data structures. But if you are the sort of person who enjoys the challenge of tilting at windmills, this might be for you.

Meson build file rewriter

Integration with IDEs and the like is important and we want to provide tools for operations such as "add source file X to target Y" so everyone does not have to write their own implementations. There is actually code for this in trunk but it is quite limited and has bitrotted a fair bit. Resurrecting and making the code actually work would be very welcome.

Introspect improvements

This one also aims to improve the IDE integration features of Meson. As an example, you can currently only get information about build targets one at a time. This means that getting the information from a project that has thousands of targets takes forever. We really need a batch exporter so IDEs can grab all necessary project information in one go. There are probably a bunch of other things to improve as well.

Could these be done as part of GSoC/Outreachy/other?

Possibly. Meson is not really an "entity" in the GSoC sense, but we could potentially get something accepted under the GNOME umbrella. However, anyone is welcome to submit patches, obviously, and several of the topics listed above are not nicely self-contained projects that would fit the GSoC mold anyway.

Friday, 16 February 2018

Automatically finding slow headers in C++ projects

A common problem in older C++ codebases is that sources compile slowly due to massive header includes. Headers include other headers, which include even more headers and then, somewhere in the guts of the system, someone includes a header that is very slow to parse. Now things are slow and nobody really knows why.

Trawling through the header soup manually is not feasible. Even if you were to inspect the headers by hand, it is difficult to know which are the slow ones. Educated guesses can be made, such as that anything with the word "boost" in its name is slow, but this only gets you so far. Fortunately it turns out to be fairly straightforward to write a tool that finds the slow ones automatically.

We need two things to be able to reliably measure the inclusion time breakdown of the headers of any source file.

  1. The transitive list of all header files it includes.
  2. The exact compiler flags used to compile the source.
The former can be obtained from a dependency file that the compiler can be told to generate during compilation (and which almost all modern build systems use by default). The latter can be obtained from the compilation database (compile_commands.json), which is also generated by most build tools today. The actual algorithm is simple: for each dependency header, create a dummy cpp file that just #includes that header, compile it and measure the time it took.
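To make the algorithm concrete, here is a minimal sketch of it in Python. This is not the actual script from the repo linked below: it assumes GCC/Clang-style .d dependency files and a compile_commands.json in the current directory, the flag filtering is simplified, and it should be run from the build directory so relative include paths resolve.

import json
import os
import shlex
import subprocess
import sys
import tempfile
import time

def headers_from_dep_file(dep_file):
    # A .d file has the form "foo.o: src.cpp a.h b.h \" with
    # backslash-continued lines; everything after the colon is a dependency.
    text = open(dep_file).read().replace('\\\n', ' ')
    deps = text.split(':', 1)[1].split()
    return [d for d in deps if not d.endswith(('.c', '.cc', '.cpp'))]

def flags_for_source(source, compdb='compile_commands.json'):
    # Find the compile command of the given source file in the compilation
    # database, dropping the source and output arguments so only flags remain.
    for entry in json.load(open(compdb)):
        if entry['file'].endswith(source):
            args = shlex.split(entry['command'])[1:]  # drop the compiler itself
            flags = []
            skip_next = False
            for a in args:
                if skip_next:
                    skip_next = False
                elif a in ('-o', '-MF', '-MT', '-MQ'):
                    skip_next = True
                elif a == '-c' or a.endswith(('.c', '.cc', '.cpp', '.o')):
                    pass
                else:
                    flags.append(a)
            return flags
    sys.exit(f'{source} not found in {compdb}')

def time_header(header, flags, compiler='c++'):
    # Compile a dummy file that includes only this one header and time it.
    with tempfile.NamedTemporaryFile(suffix='.cpp', mode='w', delete=False) as f:
        f.write(f'#include "{header}"\n')
        dummy = f.name
    try:
        start = time.time()
        subprocess.check_call([compiler, '-c', dummy, '-o', os.devnull] + flags)
        return time.time() - start
    finally:
        os.unlink(dummy)

if __name__ == '__main__':
    dep_file, source = sys.argv[1], sys.argv[2]
    flags = flags_for_source(source)
    timings = sorted(((time_header(h, flags), h)
                      for h in headers_from_dep_file(dep_file)), reverse=True)
    for t, h in timings:
        print(f'{t:.4f} {h}')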

I created a repo with the measurement script and a sample project to test it on. It has one source file and a few internal headers that include external headers. Here's the top part of its output:

0.5875 ../h1.h
0.5254 /usr/include/c++/7/regex
0.2779 /usr/include/c++/7/shared_mutex
0.2747 /usr/include/c++/7/condition_variable
0.2685 ../h2.h
0.2563 /usr/include/c++/7/locale
0.2445 /usr/include/c++/7/sstream
0.2337 ../h3.h
0.2330 /usr/include/c++/7/iostream
0.2329 /usr/include/c++/7/istream

The iostream header has traditionally been considered big, bloated and slow to compile. However, in this simple example we find that shared_mutex is even slower.

There are, of course, many caveats with this method. The main one is that it measures only parsing time, not code generation time. These two are usually highly correlated, though.

Wednesday, 14 February 2018

Meson's dependency manager in action building GTK

One of the greatest things about creating software is seeing other people pick it up and run with it. Here is a great example of GTK's new development experience using Meson subprojects to automatically obtain dependencies.

It is easy to see how this makes it easier for newcomers to participate. There are no longer pages upon pages of instructions on how to set up a build environment and so on. All that is required is to clone one Git repo and start building. The build system will take care of all the rest.

The eventual goal is to be able to build the entire stack fully from scratch on any platform, even Windows with the Visual Studio compiler. Unfortunately there are still a few missing features but we'll get them added at some point.

Friday, 9 February 2018

Looking inside a Linux powered slot machine

In my day job I work as a consultant. This means that I get to see all kinds of interesting things. One of them is this piece of hardware here:


This is a slot machine as operated by Veikkaus, the state-run corporation operating all gambling services in Finland. There are roughly 20 000 slot machines in use in Finland currently. This is interesting on its own, but things get really fun when you look at the inside.


A fair fraction of the insides is taken up by machinery that deals with coins. When a coin is inserted into the machine, it first goes into the coin acceptor, which is marked with a green box in the image. It detects the type of the coin. Each denomination has its own exit chute. Bad coins are rejected from the machine, while sorted coins get passed into coin hoppers (marked in red).

A coin hopper is basically a bowl of coins and a mechanism that is capable of ejecting coins from it one by one. When you think of slot machines, you are probably thinking of the sound they make when they start spitting out tons of coins after a jackpot. Coin hoppers are what create that particular sound. I recommend looking up videos on YouTube if you are interested in mechanical engineering, because the way they work is kind of fascinating.

The slot machine also accepts notes and debit card payments, but these are mechanically much simpler and don't take up much space. The only thing remaining in the picture is the box marked in yellow. It contains the actual brains of the entire machine.

The contents of the brain

The main system is, much like everything these days, a regular computer. This specific one is a fairly average industrial PC running a custom version of Debian. At boot it starts up the game software, which is based on a custom version of the Ogre 3D graphics engine. The computer also manages and controls all the other hardware in the cabinet, such as the coin hoppers and note acceptor mentioned above, using a custom, self-designed controller board. The cabinet housing the device is also custom-designed and built.

Thus, surprisingly, at its core a slot machine is roughly the same as a desktop PC running desktop games, with a few extra peripherals. This means that Linux desktop gaming has been mainstream among the general Finnish population for 15 years, which is roughly the amount of time these slot machines have been deployed.

In addition to the games themselves, the development environment is also 100% Linux. As a demonstration, here is a screenshot of a development version of the software running on a developer workstation.

What about the money?

Like all forms of gambling, slot machines make quite a lot of money. The profits, at last count, were on the order of 500 million euros per year. As Veikkaus is a government-run business, this money is given out to various charitable organisations as well as to the state. Given that Finland's yearly budget is on the order of 50 billion euros, this means that profits from Linux desktop gaming account for roughly 1% of the entire budget of the state of Finland.

Acknowledgements

Thanks to Veikkaus for giving me permission to write this blog post. Extra special thanks for allowing me to show the picture of the insides of a slot machine, which has never before been shown in public.