Sunday, April 11, 2021

Converting a project to Meson: the olc Pixel Game Engine

Meson's development has always been fairly practical, focusing on solving real-world problems people have. One simple way of doing that is taking existing projects, converting their build systems to Meson and seeing how long it takes, what pain points there are and whether any features are missing. Let's look at one such conversion.

We'll use the One Lone Coder Pixel Game Engine. It is a simple but fairly well-featured game engine that is especially suitable for beginners. It also has a very informative YouTube channel teaching people how to write their own computer games. The engine is implemented as a single C++ header and the idea is that you can just copy it into your own project, #include it and start coding your game. Even though the basic idea is simple, there are still some build challenges:

  • On Windows there are no dependencies, it uses only builtin OS functionality but you need to set up a Visual Studio project (as that is what most beginners tend to use)
  • On Linux you need to use system dependencies for the X server, OpenGL and libpng
  • On macOS you need to use both OpenGL and GLUT for the GUI and in addition obtain libpng either via a prebuilt library or by building it yourself from source

The Meson setup

The Meson build definition that provides all of the above is 25 lines long. We'll only look at the basic setup, but the whole file is available in this repo for those who want all the details. The most important bit is looking up dependencies, which looks like this:

external_deps = [dependency('threads'), dependency('gl')]
cpp = meson.get_compiler('cpp')

if host_machine.system() == 'windows'
  # no platform specific deps are needed
elif host_machine.system() == 'darwin'
  external_deps += [dependency('libpng'),
                    dependency('appleframeworks', modules: ['GLUT'])]
else
  external_deps += [dependency('libpng'),
                    dependency('x11'),
                    cpp.find_library('stdc++fs', required: false)]
endif

The first two dependencies are the same on all platforms and use Meson's builtin functionality for enabling threading support and OpenGL. After that we add platform specific dependencies as described above. The stdc++fs one is only needed if you want to build on Debian stable or Raspberry Pi OS, as their C++ standard library is old and does not have the filesystem parts enabled by default. If you only support new OS versions, then that dependency can be removed.
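
Once the dependency list is assembled, wiring it into a build target takes a single keyword argument. A minimal sketch (the target and source file names here are invented for illustration):

executable('pixelgame', 'main.cpp',
           dependencies: external_deps)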

The interesting bit here is libpng. As macOS does not provide it as part of the operating system, we need to build it ourselves. This can be accomplished easily by using Meson's WrapDB service for package management. Adding this dependency is a matter of going to your project's source root, ensuring that a directory called subprojects exists and running the following command:

meson wrap install libpng

This creates a wrap file that Meson can use to download and build the dependency as needed. That's all that is needed. Now anyone can fork the repo, edit the sample program source file and get going writing their own game.
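
For the curious, the generated subprojects/libpng.wrap is a plain text file. Its exact contents come from the WrapDB and change from release to release, but its general shape is roughly the following (the version, URL and hash lines here are placeholders, not real values):

[wrap-file]
directory = libpng-1.6.37
source_url = https://example.com/libpng-1.6.37.tar.gz
source_filename = libpng-1.6.37.tar.gz
source_hash = <sha256 of the release tarball>
patch_url = <WrapDB zip containing the Meson build files>
patch_filename = libpng-1.6.37-wrap.zip

[provide]
libpng = libpng_dep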

Bonus Xcode support

Unlike the Ninja and Visual Studio backends, the Xcode backend has always been a bit... crappy, to be honest. Recently I have started work on bringing it up to feature parity with the other backends. There is still a lot of work to be done, but it is now good enough that you can build and debug applications on macOS. Here is an example of the Pixel engine sample application running under the debugger in Xcode.

[Screenshot: the sample application paused under the Xcode debugger.]

This functionality will be available in the next release (no guarantees on feature completeness) but the impatient among you can try it out using Meson's Git trunk.

Wednesday, March 31, 2021

Never use environment variables for configuration

Suppose you need to create a function for adding two numbers together in plain C. How would you write it? What sort of an API would it have? One possible implementation would be this:

int add_numbers(int one, int two) {
    return one + two;
}

// to call it you'd do
int three = add_numbers(1, 2);

Seems reasonable? But what if it was implemented like this instead:

int first_argument;
int second_argument;

int add_numbers(void) {
    return first_argument + second_argument;
}

// to call it you'd do
first_argument = 1;
second_argument = 2;
int three = add_numbers();

This is, I trust you all agree, terrible. This approach is plain wrong, against all accepted coding practices and would get immediately rejected in any code review. It is left as an exercise to the reader to come up with ways in which this architecture is broken. You don't even need to look into thread safety to find correctness bugs.

And yet we have environment variables

Environment variables are exactly this: mutable global state. Envvars have some legitimate uses (such as enabling debug logging), but they should never, ever be used for configuring the core functionality of programs. Sadly they are used for this purpose a lot, and there are some people who think that this is a good thing. This causes no end of headaches due to weird corner, edge and even common cases.

Persistence of state

For example, suppose you run a command line program that has some sort of persistent state.

$ SOME_ENVVAR=... some_command <args>

Then some time after that you run it again:

$ some_command <args>

The environment is now different. What should the program do? Use the old configuration that had the env var set or the new one where it is not set? Error out? Try to silently merge the different options into one? Something else?

The answer is that you, the end user, cannot know. Every program is free to do its own thing and most do. If you have ever spent ages wondering why the exact same commands work when run from one terminal but not another, this is probably why.
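
As a concrete illustration, here is a hypothetical program (every name in it is invented) that caches its configuration in a state file on the first run. On later runs the cached value silently wins, so changing or unsetting the envvar appears to do nothing:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char config[256] = "default";
    FILE *state = fopen(".appstate", "r");
    if (state) {
        /* A previous run's configuration wins, no matter what the
         * environment says now. */
        fgets(config, sizeof config, state);
        fclose(state);
    } else {
        const char *env = getenv("SOME_ENVVAR");
        if (env)
            snprintf(config, sizeof config, "%s", env);
        state = fopen(".appstate", "w");
        if (state) {
            fputs(config, state);
            fclose(state);
        }
    }
    printf("using config: %s\n", config);
    return 0;
}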

Lack of higher order primitives

An environment variable can only contain a single null-terminated stream of bytes. This is very limiting. At the very least you'd want to have arrays, but that is not supported. Surely that is not a problem, you say, you can always do in-band signaling. For example the PATH environment variable contains multiple directories separated by the : character. What could be simpler? Many things, it turns out.

First of all, the separator for paths is not always :. On Windows it is ;. More generally, every program is free to choose its own. A common choice is the space character:

CFLAGS='-Dfoo="bar" -Dbaz' <command>

Except what if you need to pass a space character as part of the argument? Depending on the actual program, shell and the phase of the moon, you might need to do this:

ARG='-Dfoo="bar bar" -Dbaz'

or this:

ARG='-Dfoo="bar\ bar" -Dbaz'

or even this:

ARG='-Dfoo="bar\\ bar" -Dbaz'

There is no way to know which one of these is the correct form. You have to try them all and see which one works. Sometimes, as an implementation detail, the string gets expanded multiple times so you get to quote quote characters. Insert your favourite picture of Xzibit here.

For comparison, with JSON configuration files this entire class of problems would not exist. Every application would read the data in the same way, because JSON provides primitives to express these higher level constructs. In contrast, every time an environment variable needs to carry more information than a single untyped string, the programmer gets to invent a new ad hoc data marshaling scheme, and if there's one thing that guarantees usability it's reinventing the square wheel.
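
As a concrete comparison, the CFLAGS example above needs no quoting tricks at all when expressed as JSON, because arrays and strings containing spaces are first class citizens (the key name here is made up for illustration):

{
    "cflags": ["-Dfoo=bar bar", "-Dbaz"]
}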

There is a second, more insidious part to this. If a decision is made to configure something via an environment variable, the entire design goal changes. Instead of coming up with a syntax that is as good as possible for the given problem, the goal becomes producing syntax that is easy to type when issuing commands in the terminal. This reduces work in the immediate short term but increases it in the medium to long term.

Why are environment variables still used?

It's the same old trifecta of why things are bad and broken:

  1. Envvars are easy to add
  2. There are existing processes that only work via envvars
  3. "This is the way we have always done it so it must be correct!"

The first explains why even new programs add configuration options via envvars (no need to add code to the command line parser, so that's a net win, right?).

The second makes it seem like envvars are a normal and reasonable thing as they are so widespread.

The third makes it all but impossible to improve things on a larger scale. Now, granted, fixing these issues would be a lot of work and the transition would unearth a lot of bugs, but the end result would be more readable and reliable.

Monday, March 22, 2021

Writing a library and then using it as a Meson dependency directly from upstream Git

Meson has many ways of obtaining dependencies. The most common are pkg-config for prebuilt dependencies and the WrapDB for building upstream releases from source. A lesser known way is that you can get dependencies [1] directly from the upstream project's Git repository. Meson will transparently download and build them for you. There do not seem to be many examples of this on the Internet, so let's see how one would both create and consume dependencies in this way.

The library

Rather than creating a throwaway library, let's instead make one that is actually useful. The C++ standard library does not have a full featured way to split strings, meaning that every project needs to write its own. To simplify the design, we're going to steal shamelessly from Python. That is, when in doubt, try to behave as closely as possible to Python's string splitting functions. In addition:

  • support any data storage (i.e. all input parameters should be string views)
  • have helper functionality for mmap, so even huge files can be split without massive overhead
  • return types can be either efficient (string views to the input) or safe (strings with copied data)
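
A public API along these lines might look roughly like the following sketch. The names and exact signatures here are illustrative, not necessarily what the real library provides:

#include <string>
#include <string_view>
#include <vector>

// Fast: the returned views point into the caller's data, which must
// outlive them.
std::vector<std::string_view> split_views(std::string_view input,
                                          std::string_view separator);

// Safe: returns strings that own copies of the data.
std::vector<std::string> split_copies(std::string_view input,
                                      std::string_view separator);

// Python-style default: split by runs of whitespace, dropping empties.
std::vector<std::string_view> split_views(std::string_view input);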

Once you start looking into the implementation, it very quickly becomes clear why this functionality is not already in the standard library. It is quite tricky, and there are many interesting behaviours in Python's implementation that most people have never noticed. For example splitting a string by whitespace does this:

>>> ' hello world '.split()
['hello', 'world']

which is what you'd expect. But note that the only whitespace characters in this input are spaces. So what happens if we optimise the code and explicitly split only by the space character?

>>> ' hello world '.split(' ')
['', 'hello', 'world', '']

That's ... unexpected. It turns out that if you split by whitespace, Python silently removes empty substrings. You can't make it not do that if you specify your own split criterion. This seems like a thing a general solution should provide.

Another common idiom in Python is to iterate over lines in a file with this:

for line in open('infile.txt'):
    ...

This seems like a thing that could be implemented by splitting the file contents on newline characters. That works for files with Unix line endings of \n but fails with DOS line endings of \r\n. Usually in string splitting the order of the input characters does not matter, but in this case it does: \r\n is a single logical newline, whereas \n\r is two [2]. Further, in Python the returned strings contain the line ending characters converted to \n, but this is not something we can do. Opening a DOS file should return string views into the original immutable data, and turning the \r character into a \n could only be done by returning a modified copy rather than a view. This necessitates a behavioural difference to Python: the linefeed characters are omitted altogether.

This is the kind of problem that would naturally be implemented with coroutines. Unfortunately those are C++20 only, so very few people could use them, and there is not that much info online on how to write your own generators. So vectors of string_views it is, for now.
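
Even without coroutines, a line splitter that handles both Unix and DOS endings can be expressed compactly. A minimal sketch (not the actual library code) might look like this:

#include <cstddef>
#include <string_view>
#include <vector>

// Split into lines, treating both "\n" and "\r\n" as a single logical
// newline. The terminator characters are omitted from the returned views.
std::vector<std::string_view> split_lines(std::string_view input) {
    std::vector<std::string_view> lines;
    std::size_t start = 0;
    for (std::size_t i = 0; i < input.size(); ++i) {
        if (input[i] == '\n') {
            std::size_t end = i;
            if (end > start && input[end - 1] == '\r')
                --end; // DOS ending: drop the \r as well.
            lines.push_back(input.substr(start, end - start));
            start = i + 1;
        }
    }
    if (start < input.size())
        lines.push_back(input.substr(start)); // final line lacking a newline
    return lines;
}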

The implementation

The code for the library is available in this Github repo. For the purposes of this blog post, the interesting line is this one specifying the dependency information:

psplit_dep = declare_dependency(include_directories: '.')

This is the standard way subprojects set themselves up to be used. As this is a header only library, the dependency only has an include directory.
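
The complete build definition of a header-only subproject is only a few lines. Roughly like this (the version number and options here are illustrative; the real file may differ):

project('psplit', 'cpp',
        version: '0.1',
        default_options: ['cpp_std=c++17'])

psplit_dep = declare_dependency(include_directories: '.')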

Using it

A separate project that uses the dependency to implement the world's most bare bones CSV parser can be obtained here. The actual magic happens in the file subprojects/psplit.wrap, which looks like this:

[wrap-git]
directory = psplit
url = https://github.com/jpakkane/psplit.git
revision = head

[provide]
psplit = psplit_dep

The first section describes where the dependency can be downloaded and where it should be placed. The second section specifies that this repository provides one dependency named psplit and that its dependency information can be found in the subproject in a variable named psplit_dep.

Using it is simple:

psplit_dep = dependency('psplit')
executable('csvsplit', 'csvsplit.cpp',
    dependencies: psplit_dep)

When the main project requests the psplit dependency, Meson tries to find it, notices that a subproject provides it, and then downloads, configures and builds the dependency automatically.

Language support

Even though we used C++ here, this works for any language supported by Meson. It even works for mixed language projects, so you can for example have a library in plain C and create bindings to it in a different language.

[1] As long as they build with Meson.

[2] Unless you are using a BBC Micro, though I suspect you won't have a C++17 compiler at your disposal in that case.

Friday, March 19, 2021

Microsoft is shipping a product built with Meson

Some time ago Microsoft announced a compatibility pack to get OpenGL and OpenCL running even on computers whose hardware does not provide native OpenGL drivers. It is basically OpenGL-over-Direct3D. Or that is at least my understanding of it; hopefully this description is sufficiently accurate not to cause audible groans from the devs who actually know what it is doing under the covers. More actual details can be found in this blog post.

An OpenGL implementation is a whole lot of work, and writing one from scratch is a multi-year project. Instead of doing that, Microsoft chose the sensible approach of taking the Mesa implementation and porting it to work on Windows. Typically large corporations do this via vendoring, that is, copying the source code into their own repos, rewriting the build system and treating it as if it were their own code.

The blog post does not say it, but in this case that approach was not taken. Instead all work was done in upstream Mesa and the end products are built with the same Meson build files [1]. This also goes for the final release that is available in the Windows Store. This is a fairly big milestone for the Meson project, as it is now provably mature enough that major players like Microsoft are willing to use it to build and ship end user products.

[1] There may, of course, be some internal patches we don't know about.

Wednesday, March 10, 2021

Mixing Rust into an existing C shared library using Meson

Many people are interested in adding Rust to their existing projects for additional safety. For example it would be convenient to use Rust for individual high-risk things like string parsing while leaving the other bits as they are. For shared libraries you'd need to be able to do this while preserving the external plain C API and ABI. Most Rust compilation is done with Cargo, but it is not particularly suited to this task due to two things.

  1. Integrating Cargo into an existing project's build system is painful, because Cargo wants to dominate the entire build process. It does not cooperate with these kinds of build setups particularly well.
  2. Using any Cargo dependency brings in tens or hundreds of dependency crates including five different command line parsers, test frameworks and other deps that you don't care about and don't need but which take forever to compile.

It should be noted that the latter is not strictly Cargo's fault. It is possible to use it standalone without external deps. However, what seems to happen in practice is that all Cargo projects experience a dependency explosion sooner or later. Thus it would seem that there should be a less invasive way to merge Rust into an existing code base. Fortunately, with Meson, there is.

The sample project

To see how this can be done, we created a simple standalone C project for adding numbers. The full source code can be found in this repository. The library consists of three functions:

adder* adder_create(int number);
int adder_add(adder *a, int number);
void adder_destroy(adder*);

To add the numbers 2 and 4 together, you'd do this:

adder *two_adder = adder_create(2);
int six = adder_add(two_adder, 4);
adder_destroy(two_adder);

As adding numbers is highly dangerous, we want to implement the adder_add function in Rust and leave the other functions untouched. The implementation in all its simplicity is the following:

#[repr(C)]
pub struct Adder {
  pub number: i32
}

#[no_mangle]
pub extern "C" fn adder_add(a: &Adder, number: i32) -> i32 {
    return a.number + number;
}

The build setup

Meson has native support for building Rust. It does not require Cargo or any other tool; it invokes rustc directly. In this particular case we need to build the Rust code as a staticlib.

rl = static_library('radder', 'adder.rs',
                    rust_crate_type: 'staticlib')

In theory all you'd need to do, then, is to link this library into the main shared library, remove the adder_add implementation from the C side and you'd be done. Unfortunately it's not that simple. Because nothing in the existing code calls this function, the linker will look at it, see that it is unused and throw it away.

The common approach in these cases is to use link_whole instead of plain linking. That does not work here, because rustc adds its own metadata files inside the static library. The system linker does not know how to handle those and will exit with an error. Fortunately there is a way to make this work: you can specify additional undefined symbol names to the linker. This makes it behave as if something in the existing code had called adder_add, so it grabs the implementation from the static library. This can be done with an additional kwarg to the shared_library call.

link_args: '-Wl,-u,adder_add'
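
Combined with the static library target from above, the full shared library definition might look roughly like this (the C source file name is invented; the real project's target names may differ):

shared_library('adder', 'adder.c',
               link_with: rl,
               link_args: '-Wl,-u,adder_add')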

With this the goal has been reached: one function implementation is done with Rust while preserving both the API and the ABI and the test suite passes as well. The resulting shared library file is about 1 kilobyte bigger than the plain C one (though if you build without optimizations enabled, it is a whopping 14 megabytes bigger).

Wednesday, February 24, 2021

Millennium prize problems but for Linux

There is a longstanding tradition in mathematics to create a list of hard unsolved problems to drive people to work on solving them. Examples include Hilbert's problems and the Millennium Prize problems. Wouldn't it be nice if we had the same for Linux? A bunch of hard problems with sexy names that would drive development forward? Sadly there is no easy source for tens of millions of euros in prize money, not to mention it would be very hard to distribute as this work would, by necessity, be spread over a large group of people.

Thus it seems unlikely for this to work in practice, but that does not prevent us from stealing a different trick from the mathematicians' toolbox and pondering how it would work in theory. In this case the list of problems will probably never exist, but let's assume that it does. What would it contain if it did? Here's one example I came up with. It is left as an exercise to the reader to work out what prompted me to write this post.

The Memory Depletion Smoothness Property

When running the following sequence of steps:
  1. Check out the full source code for LLVM + Clang
  2. Configure it to compile Clang and Clang-tools-extra, use the Ninja backend and RelWithDebInfo build type, leave all other settings to their default values
  3. Start watching a video file with VLC or a browser
  4. Start compilation by running nice -19 ninja

The outcome must be that the video playback works without skipping a single frame or audio sample.

What happens currently?

When Clang starts linking, each linker process takes up to 10 gigabytes of RAM. This leads to memory exhaustion, flushing of active memory to swap and eventually crashing linker processes. Before that happens, however, every other app freezes completely and the entire desktop remains frozen until things get swapped back into memory, which can take tens of seconds. Interestingly, all browser tabs are killed before the linker processes start failing. This happens with both Firefox and Chromium.

What should happen instead?

The system should handle the problematic case in a better way. The linker processes will still die, as there is not enough memory to run them all, but the UI should never noticeably freeze. For extra points, the same should hold even if you run Ninja without nice.

The wrong solution

A knee-jerk reaction many people have is something along the lines of "you can solve this by limiting the number of linker processes by doing X". That is not the answer. It solves the symptoms but not the underlying cause, which is that bad input causes the scheduler to do the wrong thing. There are many other ways of triggering the same issue, for example by copying large files around. A proper solution would fix all of those in one go.

Saturday, February 6, 2021

Why most programming language performance comparisons are most likely wrong

For as long as programming languages have existed, people have fought over which one of them is the fastest. These debates have ranged from serious scientific research to many a heated late night bar discussion. Rather than getting into this argument, let's look at the problem at a higher level, namely how would you compare the performance of two different programming languages. The only really meaningful approach is to do it empirically, that is, implementing a bunch of test programs in both programming languages, benchmarking them and then declaring the winner.

This is hard. Really hard. Insanely hard in some cases and very laborious in any case. Even though the problem seems straightforward, there are a ton of error sources that can trip up the unaware (and even many very-much-aware) performance tester.

Equivalent implementations?

In order to make the two implementations comparable they should be "of equal quality". That is, they should have been implemented by people with roughly the same amount of domain knowledge as well as programming skill in their chosen language. This is difficult to organise. If the implementations are written by different people, they may approach the problem with different algorithms, making the relative performance not a question of programming languages per se, but of the programming approaches chosen by each programmer.

Even if both implementations are written by the same person using the same algorithm, there are still problems. Typically people are better at some programming languages than others, and thus tend to produce faster implementations in their favourite language. This causes bias, because the performance is not a measure of the programming languages themselves but of the individual programmer. These sorts of tests can be useful in finding usability and productivity differences, but not so much for performance.

Thus you might want to evaluate existing programs written by many expert programmers. This is a good approach, but sometimes even seasoned researchers get it wrong. There is a paper that tries to compare different programming languages for performance and power efficiency using this approach. In their test results one particular program's C implementation was 30% faster than the same program written in C++. This single measurement casts a shadow of doubt over the entire paper. If we took the original C source, changed all the source files' extensions from .c to .cpp and recompiled, the end result should have the same performance within a few percentage points. Thus we have to conclude that one of the following is happening (in decreasing order of probability):
  1. The C++ version is suboptimally coded.
  2. The testing methodology has a noticeable flaw.
  3. The compiler used has a major performance regression for C++ as opposed to C.

Or, in other words, the performance difference comes from somewhere other than the choice of programming language.

The difficulty of measurement

A big question is how one actually measures the performance of a given program. A common approach is to run the test multiple times in a row and then do something like the following:
  • Handle outliers by dropping the points at extreme ends (that is, the slowest and fastest measurements)
  • Calculate the mean and/or median for the remaining data points
  • Compare the result between different programs, the one with the fastest time wins

Those who remember their high school statistics lessons might calculate the standard deviation as well. This approach seems sound and rigorous, but it contains several sources of systematic error. The first of these is quite surprising and has to do with noise in the measurements.

Most basic statistical tools assume that the error is normally distributed with an average value of zero. If you are measuring something like temperature or speed this is a reasonable assumption. For this case it is not. A program's measured time consists of the "true" time spent solving the problem plus overhead that comes from things like OS interruptions, disk accesses and so on. If we assume that the noise is gaussian with a zero average, it would mean that the physical machine has random processes that make the program run faster than it would on a completely noise-free machine. This is, of course, impossible. The noise is strongly non-gaussian simply because it can never have a negative value.

In fact, the measurement that is the closest to the platonic ideal answer is the fastest one. It has the least amount of noise interference from the system. That is the very same measurement that was discarded in the first step when outliers were cleaned out. Sometimes doing established and reasonable things makes things worse.
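
This is easy to demonstrate with a small simulation. If the true runtime is constant and the noise can only ever add time, the minimum tracks the true value while the mean drifts upwards (a sketch with invented numbers):

import random

TRUE_TIME = 10.0  # the ideal, noise-free runtime in seconds

# System noise is strictly positive: it adds time, never removes it.
measurements = [TRUE_TIME + random.expovariate(1.0) for _ in range(100)]

print('mean:', sum(measurements) / len(measurements))  # roughly 11
print('min: ', min(measurements))                      # close to 10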

Statistics even harder

Putting that aside, let's assume we have measurements for the two programs, which do look "sufficiently gaussian". Numerical analysis shows that language #1 takes 10 seconds to run whereas language #2 takes 9 seconds. A 10% difference is notable and thus we can conclude that language #2 is faster. Right?

Well, no. Suppose the actual measurement data look like this:

[Plot: the two sets of timing measurements plotted side by side, with heavily overlapping distributions.]

Is the one on the right faster or not? Maybe? Probably? Could be? Answering this question properly requires going all the way to university level statistics. First one formulates a null hypothesis, that is, that the two programs have no performance difference. Then one calculates the probability that both of these measurements came from the same probability distribution. If that probability is small (typically under 5%), the null hypothesis is rejected and we have shown that one program is indeed faster than the other. This method is known as Student's t-test and it is commonly used in heavy duty statistics. Note that some implementations of the test assume gaussian data, and if your data has some other shape, the results you get might not be reliable.
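
In practice you rarely need to do the arithmetic by hand. A sketch using SciPy (with Welch's variant, which does not assume equal variances; the timing numbers are invented):

from scipy import stats

lang1 = [10.1, 10.3, 9.9, 10.2, 10.0]  # timings in seconds
lang2 = [9.2, 9.0, 9.4, 8.9, 9.1]

t, p = stats.ttest_ind(lang1, lang2, equal_var=False)
if p < 0.05:
    print('the difference is statistically significant')
else:
    print('cannot reject the null hypothesis')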

This works for one program, but a rigorous test has many different programs. There are statistical methods for evaluating those, but they get even more complicated. Looking up how they work is left as an exercise to the reader.

All computers' alignment is Chaotic Neutral

Statistics are hard, but fortunately computers are simple, because they are deterministic, reliable and logical. For example if you have a program and you modify it by adding a single NOP instruction somewhere in the stream, the biggest impact it could possibly have is one extra instruction cycle, which is so vanishingly small as to be unmeasurable. If you do go out and measure it, though, the results might confuse and befuddle you. Not only can this one minor change make the program 10% slower (or possibly even more), it can even make it 10% faster. Yes. Doing pointless extra work can make your program faster.

If this is the first time you encounter this issue you probably won't believe it. Some fraction might already have gone to Twitter to post their opinion about this "stupid and wrong" article written by someone who is "clearly incompetent and an idiot". That's ok, I forgive you. Human nature and all that. You'll grow out of it eventually. The phenomenon is actually real and can be verified with measurements. How is it possible that having the CPU do extra work could make the program faster?

The answer is that it doesn't. The actual instruction is irrelevant. What actually matters is code alignment. As code shifts around in memory, its performance characteristics change. If a hot loop gets split by a cache boundary it slows down. Unsplitting it speeds it up. The NOP does not need to be inside the loop for this to happen, simply moving the entire code block up or down can cause this difference. Suppose you measure two programs in the most rigorous statistical way possible. If the performance delta between the two is under something like 10%, you can not reasonably say one is faster than the other unless you use a measurement method that eliminates alignment effects.

It's about the machine, not the language

As programs get faster and faster, optimisation undergoes an interesting phase transition. Once performance reaches a certain level, the game is no longer about what the compiler and CPU can do to run the developer's program as fast as possible. Instead it becomes about how the programmer can utilize the CPU's functionality as efficiently as possible. That includes things like arranging your data into a layout that the processor can crunch with minimal effort. In effect this means replacing language based primitives with hardware based primitives. In some circles optimization works weirdly backwards: the programmer knows exactly which SIMD instructions they want a given loop to be optimized into and then fiddles around with the code until the compiler produces them. At this point the functionality of the programming language itself is immaterial.

This is the main reason why languages like C and Fortran are still at the top of many performance benchmarks, but the techniques are not limited to those languages. Years ago I worked on a fairly large Java application that had been very thoroughly optimized. Its internals consisted of integer arrays. There were no classes or even Integer objects in the fast path; it was basically a recreation of C inside Java. It could have been implemented in pretty much any programming language. The performance differences between them would have mostly come down to each compiler's optimizer. Optimizers produce wildly different performance outcomes even when using the same programming language, let alone different ones. Once this happens it's not really reasonable to claim that any one programming language is clearly faster than others, since they all get reduced to glorified inline assemblers.

References

Most of the points discussed here have been gathered from other sources and presentations, which the interested reader is encouraged to look up. These include the many optimization talks by Andrei Alexandrescu as well as the highly informative CppCon keynote by Emery Berger. Despite its venue the latter is fully programming language agnostic, so you can watch it even if you are the sort of person who breaks out in hives whenever C++ is mentioned.