Wednesday, May 5, 2021

Is the space increase caused by static linking a problem?

Most recent programming languages want to link all of their dependencies statically rather than using shared libraries. This has many implications, but for now we'll only focus on one: executable size. It is generally accepted that executables created in this way are bigger than ones using shared libraries. The question is how much bigger and whether it even matters. Proponents of static linking say the increase is irrelevant given current computers and gigabit networks. Opponents are of the, well, opposite opinion. Unfortunately there are very few real world measurements around for this.

Instead of arguing about hypotheticals, let's try to find some actual facts. Can we find a case where, within the last year or so, a major proponent of static linking has voluntarily switched to shared linking for reasons such as bandwidth savings? If such a case can be found, then it would indicate that, yes, the binary size increase caused by static linking is a real issue.

Android WebView, Chrome and the Trichrome library

Last year (?) Android changed the way it provides both the Chrome browser and the System WebView app [1]. Originally the two were fully isolated, but after the change both depend on a new library called Trichrome, which is basically just a single shared library. According to news sites, the reasoning was this:

"Chrome is no longer used as a WebView implementation in Q+. We've moved to a new model for sharing common code between Chrome and WebView (called "Trichrome") which gives the same benefits of reduced download and install size while having fewer weird special cases and bugs."

Google has, for a long time, favored static linking. Yet, in this case, they have chosen to switch from static linking to shared linking on their flagship application on their flagship operating system. More importantly their reasons seem to be purely technical. This would indicate that shared linking does provide real world benefits compared to static linking.

[1] I originally became aware of this issue since this change broke updates on both of these apps and I had to fix it manually with apkmirror.

Tuesday, May 4, 2021

"Should we break the ABI" is the wrong question

The ongoing battle over breaking C++'s ABI seems to be gathering steam again. In a nutshell there are two sets of people in this debate. The first set wants to break the ABI to gain performance and get rid of bugs, whereas the second set wants to preserve the ABI to keep old programs working. Both sides have dug in their heels and refuse to budge.

However, debating whether the ABI should be broken or not is not really the issue. A more productive question would be "if we do the break, how do we keep both the old and new systems working at the same time during a transition period?". That is the real issue. If you can create a good solution to this problem, then the original problem goes away, because both sides get what they want. In other words, what you want to achieve is to be able to run a command like this:

prog_using_old_abi | prog_using_new_abi

and have it work transparently and reliably. It turns out that this is already possible. In fact many (most?) people are reading this blog post on a computer that already does exactly that.

Supporting multiple ABIs at the same time

On Debian-based systems this is called multi-arch support. It allows you to, for example, run 32 and 64 bit apps transparently on the same machine at the same time. IIRC it was even possible to upgrade a 32 bit OS install to 64 bits by first enabling the 64 bit arch and then disabling the 32 bit one. The basic gist of multiarch is that rather than installing libraries to /usr/lib, they get installed to something like /usr/lib/x86_64. The kernel and userspace tools know how to handle these two different binary types based on the metadata in ELF executables.

Given this we could define an entirely new processor type, let's call it x86_65, and add that as a new architecture. Since there is no existing code we can do arbitrary changes to the ABI and nothing breaks. Once that is done we can create the full toolchain, rebuild all OS packages with the new toolchain and install them. The old libraries remain and can be used to run all the old programs that could not be recompiled (for whatever reason). 

Eventually the old version can be thrown away. Things like old Steam games could still be run via something like Flatpak. Major corporations that have old programs they don't want to touch are the outstanding problem case. This just means that Red Hat and SUSE can sell them extra-long term support for the old ABI userland and toolchain at a suitably expensive price. This way the organizations that don't want to get with the times are the ones who end up paying for the stability, and in return distro vendors get more money. This is good.

Simpler ABI tagging

Defining a whole virtual CPU just for this seems a bit heavy handed and would probably encounter a lot of resistance. It would be a lot smoother if there were a simpler way to version this ABI change. It turns out that there is. If you read the ELF specification, you will find that it has two fields for ABI versioning (and Wikipedia claims that the first one is usually left at zero). Using these fields the multiarch specification could be expanded into a "multi-abi" spec. It would require a fair bit of work in various system components such as the linker, RPM and Apt to ensure that two different ABIs are never loaded in the same process. As a bonus you could do ABI breaking changes to libc at the same time (such as redefining intmax_t). There do not seem to be any immediate showstoppers, though, and the basic feasibility has already been proven by multiarch.
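
For the curious, here is a minimal sketch of reading those two fields using the standard constants from Linux's <elf.h>. It only dumps the values; ELF magic validation and error handling are mostly omitted, and nothing here is part of any actual multi-abi spec:

#include <elf.h>
#include <cstdio>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "Usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    // The e_ident array is the first EI_NIDENT (16) bytes of every ELF file.
    unsigned char ident[EI_NIDENT];
    if (fread(ident, 1, EI_NIDENT, f) != EI_NIDENT) {
        fprintf(stderr, "Could not read the ELF identification bytes.\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    // These are the two versioning fields mentioned above. A multi-abi
    // scheme would need the toolchain and loaders to agree on their values.
    printf("OS ABI:      %d\n", ident[EI_OSABI]);
    printf("ABI version: %d\n", ident[EI_ABIVERSION]);
    return 0;
}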

Obligatory Rust comparison

Rust does not have a stable ABI; in fact, it is the exact opposite. Any compiler version is allowed to break the ABI in any way it wants to. This has been a constant source of friction for a long time and many people have tried to make Rust commit to some sort of a stable ABI. No-one has been successful. It is unlikely they will commit to distro-level ABI stability in the foreseeable future, possibly ever.

However, if we could agree to steady and continuous ABI transitions like these every few years, then that is something they might commit to. If this happens, the end result would be beneficial to everyone involved. Paradoxically, having a well established incremental way to break the ABI would lead to more ABI stability overall.

Sunday, April 25, 2021

The joys of creating Xcode project files

Meson's Xcode backend was originally written in 2014 or so and was never complete or even sufficiently good for daily use. The main reason for this was that at the time I got extremely frustrated with Xcode's file format and had to stop, because continuing would have led to burnout. It's just that unpleasant. In fact, when I was working on it originally I spontaneously said the following sentence out loud:

You know, I really wish this file was XML instead.

I did not think these words could be spoken with a straight face. But they were.

The Xcode project file is not really even a "build file format" in the sense that it would be a high level description of the build setup that a human could read, understand and modify. Instead it seems that Xcode has an internal enterprise-quality object model and the project file is just this data structure serialised out to disk in a sort-of-like-json-designed-in-1990 syntax. This format is called a plist, or property list. Apparently there is even an XML version of the format, but fortunately I've never seen one of those. Plists are at least plain text, so you can read and write them, but they are sufficiently like a binary format that you can't meaningfully diff them, which must make revision control conflict resolution a joy.

The semantics of Xcode project files are not documented. The only way to really work with them is to define simple projects either with Xcode itself or with CMake, read the generated project file and try to reverse-engineer its contents from those. If you get it wrong Xcode prints a useless error message. The best you can hope for is that it prints the line number where the error occurred. Often it does not.

The essentials

An Xcode project file is essentially a single object dictionary inside a wrapper dictionary. The keys are unique IDs and the values are dictionaries, meaning the whole thing is a string–variant wrapper dictionary containing a string–variant dictionary of string–variant dictionaries. There is no static typing, each object contains an isa key which has a string telling what kind of an object it is. Everything in Xcode is defined by building these objects and then by referring to other objects via their unique ids (except when it isn't, more about this later).

Since everything has a unique ID, a reasonable expectation would be that a) it would be unique per object and b) you would use that ID to refer to the target. Neither of these are true. For example let's define a single build target, which in Xcode/CMake parlance is a PBXNativeTarget. Since plist has a native array type, the target's sources could be listed in an array of unique ids of the files. Instead the array contains a reference to a PBXSourcesBuildPhase object that has a few keys which are useless and always the same and an array of unique ids. Those do not point to file objects, as you might expect, but to a PBXBuildFile object, whose contents look like this:

24AA497CCE8B491FB93D4C76 = {
  isa = PBXBuildFile;
  fileRef = 58CFC111B9B64310B946BCE7 /* /path/to/file */;
};

There is no other data in this object. Its only reason for existing, as far as I can tell, is to point to the actual PBXFileReference object which contains the filesystem path. Thus each file actually gets two unique ids which can't be used interchangeably. But why stop at two?

In order to make the file appear in Xcode's GUI it needs even more unique ids: one for the tree widget and another one that it points to. Even this is not enough, though, because if the file is used in two different targets, you can not reuse the same unique ids (you can in the build definition, but not in the GUI definition, just to make things more confusing). The end result is that if you have one source file that is used in two different build targets, it gets at least four different unique id numbers.

Quoting

In some contexts Xcode does not use uids but filenames. In addition to build targets Xcode also provides PBXAggregateTargets, which are used to run custom build steps like code generation. The steps are defined in a PBXShellScriptBuildPhase object whose output array definition looks like this:

outputPaths = (
    /path/to/output/file,
);

Yep, those are good old filesystem paths. Even better, they are defined as an actual honest to $deity array rather than a space separated string. This is awesome! Surely it means that Xcode will properly quote all these file names when it invokes external commands.

Lol, no!

If your file names have special characters in them (like, say, all of Meson's test suite does by design) then you get to quote them manually. Simply adding double quotes is not enough, since they are swallowed by the plist parser. Instead you need to add escaped quote characters like this: "\"foo bar\"". Seems simple, but what if you need to pass a backslash through the system, like if you want to define some string as "foo\bar"? The common wisdom is "don't do that", but this is a luxury we don't have, because people will expect it to work, do it anyway and report bugs when it fails.

To cut a long and frustrating debugging session short, the solution is that you need to replace every backslash with eight backslashes and then it will work. This implies that the string is interpreted by a shell-like thing three times. I could decipher where two of them occur but the third one remains a mystery. Any other number of backslashes does not work and only leads to incomprehensible error messages.
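
In code the fix boils down to a quoting helper along these lines. This is only a sketch reconstructed from the experiments above; the helper name is made up and the rules are empirical rather than anything documented by Apple:

#include <string>

// Hypothetical helper: escape one argument so that it survives all of
// Xcode's rounds of interpretation. The rules below are purely what
// trial and error uncovered.
std::string xcode_quote(const std::string &arg) {
    std::string result = "\\\"";             // an escaped quote character
    for (char c : arg) {
        if (c == '\\')
            result += "\\\\\\\\\\\\\\\\";    // one backslash becomes eight
        else
            result += c;
    }
    result += "\\\"";
    return result;
}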

Quadratic slowdowns for great enjoyment

Fast iterations are one of the main ingredients of an enjoyable development experience. Unfortunately Xcode does not provide that for this use case. It is actually agonizingly slow. The basic Meson test suite consists of around 240 sample projects. When using the Ninja backend it takes 10 minutes to configure, build and run the tests for all of them. The Xcode backend takes 24 minutes to do the same. Looking at the console when xcodebuild starts it first reads its input files, then prints "planning build", pauses for a while and then starts working. This freeze seems to take longer than Meson took configuring and generating the project file. Xcodebuild lags even for projects that have only one build target and one source file. It makes you wonder what it is doing and how it is even possible to spend almost a full second planning the build of one file. It also makes you wonder how a pure Python program written by random people in their spare time outperforms the flagship development application created by the richest corporation in the world.

Granted, this is a very uncommon case, as very few people need to regenerate their project files all the time. Still, this slowness makes developing an Xcode backend a tedious job. Every time you add functionality to fix one test case, you have to rerun the entire test suite. This means that the further along you get, the slower development becomes. By the end I was spending more time on YouTube waiting for tests to finish than I did writing code.

Final wild speculation

Xcode has a version compatibility selection box that looks like this:

This is extremely un-Apple-like. Backwards compatibility is not a thing Apple has ever really done. Typically you get support for the current version and, if you are really lucky, the previous one. Yet Xcode has support for versions going all the way back to 2008. In fact, this might be the product with the longest backwards compatibility story ever provided by Apple. Why is this?

We don't really know. However, being the maintainer of a build system means that sometimes people tell you things. Those things may be false or fabricated, of course, so the following is just speculation. I certainly have no proof for any of it. Anyhow, it seems that a lot of the fundamental code that is used to build macOS exists only in Xcode projects created ages ago. The authors of said projects have since left the company and nobody touches those projects for fear of breaking things. If true, this would indicate that the Xcode development team has to keep those old versions working. No matter what.

Sunday, April 11, 2021

Converting a project to Meson: the olc Pixel Game Engine

Meson's development has always been fairly practical, focusing on solving real world problems people have. One simple way of doing that is taking existing projects, converting their build systems to Meson and seeing how long it takes, what pain points there are and whether there are missing features. Let's look at one such conversion.

We'll use the One Lone Coder Pixel Game Engine. It is a simple but fairly well featured game engine that is especially suitable for beginners. It also has a very informative YouTube channel teaching people how to write their own computer games. The engine is implemented as a single C++ header and the idea is that you can just copy it into your own project, #include it and start coding your game. Even though the basic idea is simple, there are still some build challenges:

  • On Windows there are no external dependencies, as the engine uses only builtin OS functionality, but you need to set up a Visual Studio project (as that is what most beginners tend to use)
  • On Linux you need to use system dependencies for the X server, OpenGL and libpng
  • On macOS you need to use both OpenGL and GLUT for the GUI and in addition obtain libpng either via a prebuilt library or by building it yourself from source

The Meson setup

The Meson build definition that provides all of the above is 25 lines long. We'll only look at the basic setup, but the whole file is available in this repo for those who want all the details. The most important bit is looking up dependencies, which looks like this:

external_deps = [dependency('threads'), dependency('gl')]
cpp = meson.get_compiler('cpp')

if host_machine.system() == 'windows'
  # no platform specific deps are needed
elif host_machine.system() == 'darwin'
  external_deps += [dependency('libpng'),
                    dependency('appleframeworks', modules: ['GLUT'])]
else
  external_deps += [dependency('libpng'),
                    dependency('x11'),
                    cpp.find_library('stdc++fs', required: false)]
endif

The first two dependencies are the same on all platforms and use Meson's builtin functionality for enabling threading support and OpenGL. After that we add platform specific dependencies as described above. The stdc++fs one is only needed if you want to build on Debian stable or Raspberry Pi OS, as their C++ standard library is old and does not have the filesystem parts enabled by default. If you only support new OS versions, then that dependency can be removed.

The interesting bit here is libpng. As macOS does not provide it as part of the operating system, we need to build it ourselves. This can be accomplished easily by using Meson's WrapDB service for package management. Adding this dependency is a matter of going to your project's source root, ensuring that a directory called subprojects exists and running the following command:

meson wrap install libpng

This creates a wrap file that Meson can use to download and build the dependency as needed. That's all that is needed. Now anyone can fork the repo, edit the sample program source file and get going writing their own game.

Bonus Xcode support

Unlike the Ninja and Visual Studio backends, the Xcode backend has always been a bit... crappy, to be honest. Recently I have started work on bringing it up to feature parity with the other backends. There is still a lot of work to be done, but it is now good enough that you can build and debug applications on macOS. Here is an example of the Pixel engine sample application running under the debugger in Xcode.


This functionality will be available in the next release (no guarantees on feature completeness) but the impatient among you can try it out using Meson's Git trunk.

Wednesday, March 31, 2021

Never use environment variables for configuration

Suppose you need to create a function for adding two numbers together in plain C. How would you write it? What sort of an API would it have? One possible implementation would be this:

int add_numbers(int one, int two) {
    return one + two;
}

// to call it you'd do
int three = add_numbers(1, 2);

Seems reasonable? But what if it was implemented like this instead:

int first_argument;
int second_argument;

int add_numbers(void) {
    return first_argument + second_argument;
}

// to call it you'd do
first_argument = 1;
second_argument = 2;
int three = add_numbers();

This is, I trust you all agree, terrible. This approach is plain wrong, against all accepted coding practices and would get immediately rejected in any code review. It is left as an exercise to the reader to come up with ways in which this architecture is broken. You don't even need to look into thread safety to find correctness bugs.

And yet we have environment variables

Environment variables are exactly this: mutable global state. Envvars have some legitimate usages (such as enabling debug logging) but they should never, ever be used for configuring the core functionality of programs. Sadly they are used for this purpose a lot, and there are some people who think that this is a good thing. This causes no end of headaches due to weird corner, edge and even common cases.

Persistence of state

For example suppose you run a command line program that has some sort of a persistent state.

$ SOME_ENVVAR=... some_command <args>

Then some time after that you run it again:

$ some_command <args>

The environment is now different. What should the program do? Use the old configuration that had the env var set or the new one where it is not set? Error out? Try to silently merge the different options into one? Something else?

The answer is that you, the end user, can not know. Every program is free to do its own thing and most do. If you have ever spent ages wondering why the exact same commands work when run from one terminal but not the other, this is probably why.

Lack of higher order primitives

An environment variable can only contain a single null-terminated stream of bytes. This is very limiting. At the very least you'd want to have arrays, but those are not supported. Surely that is not a problem, you say, you can always do in-band signaling. For example the PATH environment variable has many directories which are separated by the : character. What could be simpler? Many things, it turns out.
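
The receiving end of that in-band signaling looks roughly like the following sketch. Every program that consumes such a variable carries its own version of this loop (the function here is illustrative, not taken from any particular code base):

#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

// In-band signaling in practice: split an environment variable into
// pieces using a separator character the program just has to know.
std::vector<std::string> split_envvar(const char *name, char separator) {
    std::vector<std::string> result;
    const char *value = std::getenv(name);
    if (!value)
        return result; // an unset variable silently becomes an empty list
    std::string current;
    for (const char *p = value; *p; ++p) {
        if (*p == separator) {
            result.push_back(current);
            current.clear();
        } else {
            current += *p;
        }
    }
    result.push_back(current);
    return result;
}

int main() {
    // Guessing that the separator is a colon, which is not always true.
    for (const auto &dir : split_envvar("PATH", ':'))
        std::cout << dir << '\n';
}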

First of all the separator for paths is not always :. On Windows it is ;. More generally every program is free to choose its own. A common choice is space:

CFLAGS='-Dfoo="bar" -Dbaz' <command>

Except what if you need to pass a space character as part of the argument? Depending on the actual program, shell and the phase of the moon, you might need to do this:

ARG='-Dfoo="bar bar" -Dbaz'

or this:

ARG='-Dfoo="bar\ bar" -Dbaz'

or even this:

ARG='-Dfoo="bar\\ bar" -Dbaz'

There is no way to know which one of these is the correct form. You have to try them all and see which one works. Sometimes, as an implementation detail, the string gets expanded multiple times so you get to quote quote characters. Insert your favourite picture of Xzibit here.

For comparison, with JSON configuration files this entire class of problems would not exist. Every application would read the data in the same way, because JSON provides primitives to express these higher level constructs. In contrast, every time an environment variable needs to carry more information than a single untyped string, the programmer gets to create a new ad hoc data marshaling scheme, and if there's one thing that guarantees great usability it's reinventing the square wheel.
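
A sketch of what that would look like, assuming the third party nlohmann/json library and a made-up configuration file layout (both are my choices for illustration, not anything prescribed):

#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp>

int main() {
    // Hypothetical config file; the file name and key are made up.
    std::ifstream input("app_config.json");
    nlohmann::json config = nlohmann::json::parse(input);
    // An array of compiler flags needs no separator convention and no
    // quoting games: every element is its own string, spaces and all.
    for (const auto &flag : config["cflags"])
        std::cout << flag.get<std::string>() << '\n';
}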

There is a second, more insidious part to this. If a decision is made to configure something via an environment variable then the entire design goal changes. Instead of coming up with a syntax that is as good as possible for the given problem, the goal becomes producing syntax that is easy to type on the terminal. This reduces work in the immediate short term but increases it in the medium to long term.

Why are environment variables still used?

It's the same old trifecta of why things are bad and broken:

  1. Envvars are easy to add
  2. There are existing processes that only work via envvars
  3. "This is the way we have always done it so it must be correct!"

The first explains why even new programs add configuration options via envvars (no need to add code to the command line parser, so that's a net win, right?).

The second makes it seem like envvars are a normal and reasonable thing as they are so widespread.

The third makes it all but impossible to improve things on a larger scale. Now, granted, fixing these issues would be a lot of work and the transition would unearth a lot of bugs, but the end result would be more readable and reliable systems.

Monday, March 22, 2021

Writing a library and then using it as a Meson dependency directly from upstream Git

Meson has many ways of obtaining dependencies. The most common is pkg-config for prebuilt dependencies and the WrapDB for building upstream releases from source. A lesser known way is that you can get dependencies [1] directly from the upstream project's Git repository. Meson will transparently download and build them for you. There do not seem to be many examples of this on the Internet, so let's see how one would both create and consume dependencies in this way.

The library

Rather than creating a throwaway library, let's instead make one that is actually useful. The C++ standard library does not have a full featured way to split strings, meaning that every project needs to write their own. To simplify the design, we're going to steal shamelessly from Python. That is, when in doubt, try to behave as closely as possible to how Python's string splitter functions work. In addition:

  • support any data storage (i.e. all input parameters should be string views)
  • have helper functionality for mmap, so even huge files can be split without massive overhead
  • return types can be either efficient (string views to the input) or safe (strings with copied data)

Once you start looking into the implementation it very quickly becomes clear why this functionality is not already in the standard library. It is quite tricky and there are many interesting things in Python's implementation that most people have never noticed. For example splitting a string via whitespace does this:

>>> ' hello world '.split()
['hello', 'world']

which is what you'd expect. But note that the only whitespace characters are spaces. So what happens if we optimise the code and explicitly split only by space?

>>> ' hello world '.split(' ')
['', 'hello', 'world', '']

That's ... unexpected. It turns out that when splitting by whitespace, Python silently removes empty substrings, and there is no way to get that behaviour when you specify your own split criterion. This seems like a thing a general solution should provide.
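
In C++ this might turn into something like the following sketch (illustrative only, not the actual psplit code): one overload with an explicit separator that keeps empty substrings, and one whitespace version that drops them, mirroring Python:

#include <cctype>
#include <cstddef>
#include <string_view>
#include <vector>

// Split on an explicit separator, keeping empty substrings,
// just like Python's str.split(' ').
std::vector<std::string_view> split_view(std::string_view input, char separator) {
    std::vector<std::string_view> result;
    size_t start = 0;
    while (true) {
        const size_t pos = input.find(separator, start);
        if (pos == std::string_view::npos) {
            result.push_back(input.substr(start));
            return result;
        }
        result.push_back(input.substr(start, pos - start));
        start = pos + 1;
    }
}

// Split on runs of whitespace, silently dropping empty substrings,
// just like Python's str.split() with no arguments.
std::vector<std::string_view> split_view(std::string_view input) {
    std::vector<std::string_view> result;
    size_t i = 0;
    while (i < input.size()) {
        while (i < input.size() && std::isspace((unsigned char)input[i]))
            ++i;
        const size_t start = i;
        while (i < input.size() && !std::isspace((unsigned char)input[i]))
            ++i;
        if (i > start)
            result.push_back(input.substr(start, i - start));
    }
    return result;
}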

Another common idiom in Python is to iterate over lines in a file with this:

for line in open('infile.txt'):
    ...

This seems like a thing that could be implemented by splitting the file contents with newline characters. That works for files whose line separator is \n but fails with DOS line endings of \r\n. Usually in string splitting the order of the input characters does not matter, but in this case it does. \r\n is a single logical newline, whereas \n\r is two [2]. Further, in Python the returned strings have their line endings converted to \n, but this is not something we can do. Splitting a DOS file should return string views into the original immutable data, so we cannot turn the \r into a \n; that could only be done by returning a modified copy rather than a view. This necessitates a behavioural difference from Python: the line ending characters are omitted entirely.
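
A sketch of a line splitter with these semantics (again illustrative rather than the real implementation):

#include <cstddef>
#include <string_view>
#include <vector>

// Split into lines, treating \n, \r and \r\n each as a single logical
// newline. The terminator characters are omitted from the returned
// views, since a view into immutable data cannot rewrite \r\n as \n.
std::vector<std::string_view> split_lines(std::string_view input) {
    std::vector<std::string_view> lines;
    size_t start = 0;
    for (size_t i = 0; i < input.size(); ++i) {
        if (input[i] == '\n' || input[i] == '\r') {
            lines.push_back(input.substr(start, i - start));
            if (input[i] == '\r' && i + 1 < input.size() && input[i + 1] == '\n')
                ++i; // \r\n is one newline, \n\r is two
            start = i + 1;
        }
    }
    if (start < input.size())
        lines.push_back(input.substr(start)); // last line had no terminator
    return lines;
}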

This is the kind of problem that would be naturally implemented with coroutines. Unfortunately those are C++20 only, so very few people could use them, and there is not that much info online on how to write your own generators. So vectors of string_views it is for now.

The implementation

The code for the library is available in this GitHub repo. For the purposes of this blog post, the interesting line is this one specifying the dependency information:

psplit_dep = declare_dependency(include_directories: '.')

This is the standard way subprojects set themselves up to be used. As this is a header only library, the dependency only has an include directory.

Using it

A separate project that uses the dependency to implement the world's most bare bones CSV parser can be obtained here. The actual magic happens in the file subprojects/psplit.wrap, which looks like this:

[wrap-git]
directory = psplit
url = https://github.com/jpakkane/psplit.git
revision = head

[provide]
psplit = psplit_dep

The first section describes where the dependency can be downloaded and where it should be placed. The second section specifies that this repository provides one dependency named psplit and that its dependency information can be found in the subproject in a variable named psplit_dep.

Using it is simple:

psplit_dep = dependency('psplit')
executable('csvsplit', 'csvsplit.cpp',
    dependencies: psplit_dep)

When the main project requests the psplit dependency, Meson tries to find it, notices that a subproject provides it, and then downloads, configures and builds the dependency automatically.

Language support

Even though we used C++ here, this works for any language supported by Meson. It even works for mixed language projects, so you can for example have a library in plain C and create bindings to it in a different language.

[1] As long as they build with Meson.

[2] Unless you are using a BBC Micro, though I suspect you won't have a C++17 compiler at your disposal in that case.

Friday, March 19, 2021

Microsoft is shipping a product built with Meson

Some time ago Microsoft announced a compatibility pack to get OpenGL and OpenCL running even on computers whose hardware does not provide native OpenGL drivers. It is basically OpenGL-over-Direct3D. Or that is at least my understanding of it; hopefully this description is sufficiently accurate not to cause audible groans from the devs who actually know what it is doing under the covers. More actual details can be found in this blog post.

An OpenGL implementation is a whole lot of work and writing one from scratch is a multi-year project. Instead of doing that, Microsoft chose the sensible approach of taking the Mesa implementation and porting it to work on Windows. Typically large corporations do this via the vendoring approach, that is, copying the source code into their own repos, rewriting the build system and treating it as if it were their own code.

The blog post does not say it, but in this case that approach was not taken. Instead all work was done in upstream Mesa and the end products are built with the same Meson build files [1]. This also goes for the final release that is available in Windows Store. This is a fairly big milestone for the Meson project as it is now provably mature enough that major players like Microsoft are willing to use it to build and ship end user products. 

[1] There may, of course, be some internal patches we don't know about.