Monday, January 20, 2020

The Meson Manual is now available for purchase

Some of you might remember that last year I ran a crowdfunding campaign to create a full written user manual for Meson. That failed fairly spectacularly, mostly due to the difficulty of getting any sort of visibility for these kinds of projects (i.e. on the Internet, everything drowns).

Not taking the hint, I chose to write and publish it on my own anyway. It is now available on this web page for the price of 29.95€ plus a tax that depends on the country of purchase. Some countries that have unreasonable requirements for foreign online sellers, such as India, Russia and South Korea, have been geoblocked. Sorry about that. However, you can still buy the book if you are traveling outside the country in question, but in that case all tax responsibilities for importing it fall upon you.

What if you don't care about books?

I don't have a Patreon or any other crowdfunding thing ongoing, because of the considerable legal uncertainties of running a donation-based service for the public good in Finland. Selling digital goods for money is fine, so this is a convenient way for people to support my work on Meson financially.

Will the book be made available under a free license?

No. We already have one set of free documentation on the project web site. Everyone is free to use and contribute to that documentation. This book contains no text from the existing documentation; it is all new and written from scratch.

Is it available as a hard copy?

No, the only available format is PDF. This is partly to save trees and partly because international shipping of physical items is both time-consuming and expensive.

Getting review copies

If you are a journalist and wish to write a review of the book for a publication, send me an email and I'll provide you with a free review copy.

When was the book first made public?

It was announced at the very beginning of my LCA2020 talk. See it for yourself in the embedded video below.

Can you post about this on your favourite social media site / news aggregator / etc?

Yes, by all means. It is hard to get visibility otherwise, so I appreciate all the help I can get.

Monday, December 30, 2019

How about not stabbing ourselves in the leg with a rusty fork?

Corporations are funny things. Many things no reasonable person would do on their own are done every day in thousands of business conglomerates around the world. With pride even. Let us consider as an arbitrary example a corporation where every day is started by employees stabbing themselves in the leg with a rusty fork. This is (I hope) not actually done for real, but there could be a company out there where this is the daily routine.

If you think that such a thing could not possibly happen, congratulations on having never worked in a big corporation. Stick with that if you can!

When faced with this kind of pointless and harmful routine, one might suggest not doing it any more or replacing it with some other, more useful procedure. This does not succeed, of course, but that is not the point. The reasons you get back are the interesting thing, because they will tell you what kind of manager and coworkers you are dealing with. Here are some possible options; can you think of more?

The survivor fallacist

This is a multi-billion dollar company. If stabbing oneself in the leg was bad, as you seem to claim, we could not have succeeded.

The minimum energy spender

It would take too much work to get this changed. Just bite the bullet and do it every morning. You're better off this way.

The blame shifter

This is mandated by our head office, we can't do anything about this even if we wanted to.

The metric optimizer

Our next year's bonus metric will measure the number of leg stabbings reduced that year. We must get as many of them in this year as we possibly can.

The traditionalist

We have always done this. We must always do it.

The cornered animal

How dare you! Do you have any idea how much work it is to get pre-rusted forks? They are all made of stainless steel nowadays. Your derogatory insinuations are a slap in the face of all people working to keep this system running!

The folklorist

This is a commonly accepted best practice in software companies, thus we should do it also.

The brainwashee

This is actually a great invention. Getting a nice jolt of adrenaline first thing in the morning really wakes you up and gives you focus for the entire day. Try it for a month or two! You'll see.

The control freak messiah

This procedure was put in place by the founder/CEO. You do not challenge his choices if you know what is good for you.

The team spiritist

If you don't stab yourself in the leg, you are setting a very bad example that demoralizes everybody else who does their part diligently.

And finally the (sadly) most common one

Our product is special.

Saturday, December 28, 2019

What can clang-format teach us about the human condition?

Most people who do programming have taken part in at least one code formatting war. Usually these come about when companies want to standardise their code bases and thus want everything formatted according to a single style. Style wars, much like real wars, are not pleasant to be in. They cause havoc and destruction, turn reasonable people into life-long sworn enemies and halt work on anything useful.

In a typical style argument, statements like the following are often thrown about:

  • Indentation should be done with tabs, because everyone can set tab width to whatever they want in their editor.
  • The opening brace must be on the same line as its preceding clause. This saves vertical space and thus makes the code more readable.
  • The opening brace must be on its own line. This makes code blocks stand out better and thus makes the code more readable.
  • When laying things like arguments vertically, the separating comma must be at the beginning of the line rather than the end. In this way when you add or remove an entry, the diff is always only one line.
  • In a declaration like int *bob, the asterisk must be next to the variable name, because that is what it binds to.
  • In a declaration like int* bob, the asterisk must be next to the type name, because "pointerness" is logically a feature of the type, not the variable.
  • Class variables must begin with m_ so they stand out better.
  • Class variables must not be separated with a prefix. The syntax highlighter will already draw them in a different color and if you have so many variables in your methods that you can't immediately tell which is which, your methods are too big and must be split.
  • Et cetera, et cetera, ad infinitum, ad nauseam.
It is unknown when code formatting wars first began. Given that FORTRAN was the first real programming language with an actual syntax and was first released in 1957, the answer probably is "way, way before that". A reasonable guess would be the first or second design meeting on the syntax. Fighting over code style kept raging for almost sixty years after that. The arguments were the same, the discussion was the same, no progress was ever made. Then clang-format was introduced and suddenly everything changed.

This was surprising, because automatic code formatters had existed for decades and clang-format was "the same, just slightly better". Yet it made this problem mostly go away. Why?

Enter the human element

With all existing formatters it was fairly easy to find code where it failed. C macros were especially treacherous in this regard. This meant that either one needed to manually add (and update) comments that disabled formatting for some blocks or the formatter was run only every now and then by hand and the result had to be inspected and fixed by hand after the fact. With clang-format this manual work went effectively to zero. You could just run it at any time, even automatically before every commit. In a weird kind of backwards way once we had the correct solution, we could finally understand what the real problem was.

Every programmer writes code in their own way. Maybe they put braces on the same line, maybe not. Maybe they indent with spaces, maybe not. The details don't matter, the real point is that writing code in this style is effortless. It just flows from your brain to the screen. Coding in any other style means spending brain energy on either typing in some non-natural style or fixing the code afterwards. This is manual and tedious work, just the kind that programmers hate with a passion. Thus when the threat of an externally mandated code style appears, the following internal monologue takes place:
If they choose a style different from mine, then I will forever have to write in a style that is unnatural to me. This is tedious. However if I spend some energy now and convince everyone else to use my style, then I can keep on doing what I have been doing thus far. All I have to do is to factually explain why my chosen style is the best, and since other programmers are rational they will understand my point, agree and adopt my chosen style.
Unfortunately everyone else participating in the debate has the exact same idea and things end in a stalemate almost immediately. The sunk-cost fallacy ensures that once a person has publicly committed to a style choice, they will never budge from it.

Note the massive dichotomy at play here. The real reason people have for any style choice is "this is what I have gotten used to" but when they debate their chosen style they always, always use reasoning that aims to be objective and scientific. At this point you might want to pause and reread the sample arguments listed above. They follow this reasoning exactly and most claim to improve some real world objective metric such as readability. They are also all lies. These are all post-rationalising arguments, invented after the fact to make the opinion you already have sound as good as possible. They are not the real reason. They have never been the real reason. It is unlikely they will ever be the real reason. But the debate is carried on as if they were the real reason. This is why it will never end.

The lengths people are willing to go to in their post-rationalising arguments are nothing short of astounding. In this video on indenting with tabs vs spaces many tab advocates say that indenting with tabs is better because "you only need to press tab once rather than press space multiple times". Every single programmer's text editor since 1985 (possibly 1975 and potentially even 1965) has had a feature where pressing the tab key inserts the logically equivalent indentation using spaces. Using this as an argument only shows that you have not done even the most minimal of thinking on the issue, but have instead already made up your mind and don't want to even consider changing it.

This is why code style discussions never go anywhere. They are not about bringing people together to find the best possible choice. They are about trying to make other people submit to your will by repeatedly bashing them on the head with your style guide. This does not work because the average programmer's head is both thicker and more durable than the average style guide.

Tuesday, December 17, 2019

Notes on tax issues on selling digital goods internationally

Note: This blog post should not be considered tax, legal or any other sort of advice. There are no guarantees of any kind, even that any of the information below is correct. Consult a qualified professional before embarking on any international business ventures.

Some tax requirements for selling pretty much anything internationally (that I found out by googling and looking up random government web sites)

Nowadays it is easy to start a web store and sell products such as digital downloads to any country in the world. You might think that simply paying appropriate taxes in your own country would be enough. It's not. There are cases where you need to pay taxes or fees to other countries as well, specifically the ones you sell your products to. Surprisingly this can be the case even for very small amounts of money.

There seem to be three main cases: USA, the EU and individual countries. Let's go through them in increasing order of difficulty.

Individual countries

Most countries have a requirement that if you sell digital goods to them you need to register in said country, collect the appropriate amount of tax on your sales and then report and pay it. Most countries have a lower limit under which you don't need to do anything. This is usually on the order of 10 000 to 100 000 euros per year, which small scale operations won't ever reach. Unfortunately in some countries this limit is zero. That is, if your sales are even one euro, you need to register and do the full bureaucratic dance. These countries include Albania, Russia, South Korea and India among others. Lists of limits per country can be found online. Be careful when reading them, though, as web pages can get out of date quickly.

For small businesses the only realistic choice is to geoblock countries where the tax limit is zero. Dealing with the hassles is just not worth it. This is fairly easy, as most payment providers have good geoblocking tools.

VAT in the European Union

In the EU you can register in each individual country as discussed above. However there is also a newer, simplified system for digital services called VAT MOSS. The idea is that you don't need to register in each country; instead you report the VAT you have collected to your own tax authority and it takes care of the rest. This is highly convenient, because you can then sell to every EU member state but only have to deal with the bureaucracy of one of them.

There is a similar thing for non-EU companies, but I have not looked at how it works in detail for obvious reasons. Just note that the registration limit for the EU is also zero, meaning that you must register if you sell anything at all to the EU. Sadly this means that for some people geoblocking all of the EU is an entirely reasonable thing to do.

Sales tax in the USA

The good news is that the USA does not have a federal sales tax. The bad news is that each state has its own laws on sales taxes. Whether or not you need to pay sales taxes in a given state depends on whether you have a "nexus" in the state. This used to mean something like an office. However, then buying stuff over the Internet happened, and now having a nexus simply means selling more than a given threshold's worth of goods or services to people in the state. Lists of these limits per state can be found online as well.

This is where things get unpleasant for small players. The limit for Kansas is zero, meaning that even a single sale to Kansas means you have to register, collect sales tax and pay it to the state authorities. Other states have reasonable limits such as 100 000 dollars, but the obligation is also triggered if you have more than 200 sales events in total regardless of their value. This is a lot easier to trigger by accident.

Unfortunately payment processors don't seem to provide state-based geoblocking. Thus if you enable sales to the USA and that leads to even one sale in Kansas, you just got hit by a bunch of legal requirements. Dealing with all of these is not really feasible for small operations. On the other hand blocking all of USA means losing a fairly large chunk of your revenue.

To keep things from being too simple, there are web pages claiming that having an "economic nexus" works differently for digital products. Based on one such page, Kansas does not have a sales tax at all for purely digital products, so you could sell an arbitrary amount of products there without needing to pay any sales tax. Which one of these is correct? I don't actually know. I have spent the entire day reading up on international tax laws and now my head hurts and I just want to close the computer and have a drink.

What does this mean for tipping services like Patreon, crowdfunding et al?

Again: I don't really know. However a case can reasonably be made (at least by a tax collector who wants to get your money) that paying through one of those platforms constitutes a "sale of services" or something similar and is thus subject to a sales tax. For example Patreon's documentation page states that they take care of paying VAT for EU customers but that customers in the USA need to take care of their tax responsibilities themselves. Presumably this also applies to all other countries.

Thus it may be the case that everyone who is running a tipping service and has taken any money from countries or states with a zero limit on sales taxes has unexpectedly been burdened with legal responsibilities to tens of different tax offices around the world.

In any case all of this means that things like gathering funds for open source development are a lot more complicated than they appear at first glance.

Saturday, December 7, 2019

There are (at least) three distinct dependency types

Using dependencies is one of the main problems in software development today. It has become even more complicated with the recent emergence of new programming languages and the need to combine them with existing programs. Most discussion about it has been informal and high level, so let's try to approach it in a more disciplined way and see how the different dependency approaches work.

What do we mean when we say "work"?

In this post we are going to use the word "work" in a very specific way. A dependency approach is said to work if and only if we can take two separate code projects where one uses the other and use them together without needing to write special case code. That is, we should be able to snap the two projects together like Lego. If this can be done to arbitrary projects with a success rate of more than 95%, then the approach can be said to work.

It should be especially noted that "I tried this with two trivial helloworld projects and it worked for me" does not fulfill the requirements of working. Sadly this line of reasoning is used all too often in online dependency discussions, but it is not a response that holds any weight. Any approach that has not been tested with at least tens (preferably hundreds) of packages does not have enough real world usage experience to be taken seriously.

The phases of a project

Every project has three distinct phases on its way from source code to a final executable.
  1. Original source in the source directory
  2. Compiled build artifacts in the build directory
  3. Installed build artifacts on the system
In the typical build workflow step 1 happens after you have done a Git checkout or equivalent. Step 2 happens after you have successfully built the code with ninja all. Step 3 happens after a ninja install.

The dependency classes

Each of these phases has a corresponding way to use dependencies. The first and last ones are simple so let's examine those first.

The first one is the simplest. In a source-only world you just copy the dependency's source inside your own project, rewrite the build definition files and use it as if it were an integral part of your own code base. The monorepos used by Google, Facebook et al are done in this fashion. The main downside is that importing and updating dependencies is a lot of work.

The third approach is the traditional Linux distro approach. Each project is built in isolation and installed either on the system or in a custom prefix. Each dependency provides a pkg-config file which defines both the dependency and how it should be used. This approach is easy to use and scales really well. The main downside is that you usually end up with multiple versions of some dependency libraries on the same file system, which means that they will eventually get mixed up and crash in spectacular but confusing ways.

The second approach

A common thing people want to do is to mix two different languages and build systems in the same build directory. That is, to build multiple different programming languages with their own build systems intermixed so that one uses the built artifacts of the other directly from the build dir.

This turns out to be much, much, much more difficult than the other two. But why is that?

Approach #3 works because each project is clearly separated and the installed formats are simple, unambiguous and well established. This is not the case for build directories. Most people don't know this, but binaries in build directories are not the same as the installed ones. Every build system conjures its own special magic and does things slightly differently. The unwritten contract has been that the build directory is each build system's internal implementation detail. They can do with it whatever they want, just as long as after install they provide the output in the standard form.

Mixing the contents of two build systems' build directories is not something that "just happens". Making one "just call" the other does not work simply because of the N^2 problem. For example, currently you'd probably want to support C and C++ with Autotools, CMake and Meson, D with Dub, Rust with Cargo, Swift with SwiftPM, Java with Maven (?) and C# with MSBuild. That is already up to 8*7 = 56 integrations to write and maintain.

The traditional way out is to define a data interchange protocol to declare build-dir dependencies. This has to be at least as rich in semantics as pkg-config, because that is what it is: a pkg-config for build dirs. On top of that you need to formalise all the other things about setup and layout that pkg-config gets for free by convention, and then make every build system adhere to all of it. This seems like a tall order and no-one is really working on it as far as I know.

What can we do?

If build directories can't be mixed and system installation does not work due to the potential of library mixups, is there anything we can do? It turns out not only that we can, but that there is already a potential solution (or at least an approach for one): Flatpak.

The basic idea behind Flatpak is that it defines a standalone file system for each application that looks like a traditional Linux system's root file system. Dependencies are built and installed there as if one was installing them to the system prefix. What makes this special is that the filesystem separation is enforced by the kernel. Within each application's file system only one version of any library is visible. It is impossible to accidentally use the wrong version. This is what traditional techniques such as rpath and LD_LIBRARY_PATH have always tried to achieve, but have never been able to do reliably. With kernel functionality this becomes possible, even easy.

What sets Flatpak apart from existing app container technologies such as iOS and Android apps, UWP and so on is its practicality. The other techs are all about defining new, incompatible worlds that are extremely limited and invasive (for example spawning new processes is often prohibited). Flatpak is not. It is about making the app environment look as much as possible like the enclosing system. In fact it goes to great lengths to make this work transparently and it succeeds admirably. There is not a single developer on earth who would tolerate doing their own development inside, say, an iOS app. It is just too limited. By contrast developing inside Flatpak is not only possible and convenient, but something people already do today.

The possible solution, then, is to shift the dependency consumption from option 2 to option 3 as much as possible. It has only one real new requirement: each programming language must have a build system agnostic way of providing prebuilt libraries. Preferably this should be pkg-config but any similar neutral format will do. (For those exclaiming "we can't do that, we don't have a stable ABI", do not worry. Within the Flatpak world there is only one toolchain, system changes cause a full rebuild.)

With this the problem is now solved. All one needs to do is to write a Flatpak builder manifest that builds and installs the dependencies in the correct order. In this way we can mix and match languages and build systems in arbitrary combinations and things will just work. We know it will, because this is essentially how Debian, Fedora and all the other distros are already put together.

Monday, November 25, 2019

Process invocation will forever be broken

Invoking new processes is, at its core, a straightforward operation. Pretty much everything you need to know to understand it can be seen in the main declaration of the helloworld program:

#include <stdio.h>

int main(int argc, char **argv) {
    printf("Hello, world.\n");
    return 0;
}

The only (direct) information passed to the program is an array of strings containing its (command line) arguments. Thus it seems like an obvious conclusion that there is a corresponding function that takes an executable to run and an array of strings with the arguments. This turns out to be the case, and it is what the exec family of functions do. An example would be execve.
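
As a rough sketch (mine, not from any particular project), spawning ls -l /tmp through this array-based interface on a POSIX system might look like this:

#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    // The argument array is handed to the child verbatim. No
    // splitting, joining or quoting happens at any point.
    const char *args[] = {"ls", "-l", "/tmp", nullptr};
    pid_t pid = fork();
    if (pid == 0) {
        execvp(args[0], const_cast<char *const *>(args));
        perror("execvp"); // reached only if the exec failed
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);
    return 0;
}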

These functions only exist on posixy operating systems; they are not available on Windows. The native way to start processes on Windows is the CreateProcess function. It does not take an array of strings, instead it takes a single string:

BOOL CreateProcessA(
  LPCSTR lpApplicationName,
  LPSTR  lpCommandLine,
  ...

The operating system then internally splits the string into individual components using an algorithm that is not at all simple or understandable and whose details most people don't even know.
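
For comparison, here is a minimal sketch of the Windows side (the file name is made up, error handling elided). Note that the command line even has to be a writable string:

#include <windows.h>

int main() {
    // The entire command is a single string. Someone, somewhere has
    // to split it back into individual arguments, quoting rules and
    // all. CreateProcess may modify the string, so it must be mutable.
    char cmdline[] = "notepad.exe \"C:\\My Documents\\notes.txt\"";
    STARTUPINFOA si = {};
    si.cb = sizeof(si);
    PROCESS_INFORMATION pi = {};
    if (CreateProcessA(nullptr, cmdline, nullptr, nullptr, FALSE, 0,
                       nullptr, nullptr, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}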

Side note: why does Windows behave this way?

I don't know for sure. But we can formulate a reasonable theory by looking in the past. Before Windows existed there was DOS, and it also had a way of invoking processes. This was done by using interrupts, in this case function 4bh in interrupt 21h. Browsing through online documentation we can find a relevant snippet:

Action: Loads a program for execution under the control of an existing program. By means of altering the INT 22h to 24h vectors, the calling prograrn [sic] can ensure that, on termination of the called program, control returns to itself.
On entry: AH = 4Bh
AL = 0: Load and execute a program
AL = 3: Load an overlay
DS.DX = segment:offset of the ASCIIZ pathname
ES:BX = Segment:offset of the parameter block
Parameter block bytes:
0-1: Segment pointer to envimmnemnt [sic] block
2-3: Offset of command tail
4-5: Segment of command tail

Here we see that the command is split in the same way as in the corresponding Win32 API call, into a command to execute and a single string that contains the arguments (the command tail, though some sources say that this should be the full command line). This interrupt handler could take an array of strings instead, but does not. Most likely this is because it was the easiest thing to implement in real mode x86 assembly.

When Windows 1.0 appeared, its coders probably either used the DOS calls directly or copied the same code inside Windows' code base for simplicity and backwards compatibility. When the Win32 API was created they probably did the exact same thing. After all, you need the single string version for backwards compatibility anyway, so just copying the old behaviour is the fast and simple thing to do.

Why is this behaviour bad?

There are two main use cases for invoking processes: human invocations and programmatic invocations. The former happens when human beings type shell commands and pipelines interactively. The latter happens when programs invoke other programs. For the former case a string is the natural representation for the command, but this is not the case for the latter. The native representation there is an array of strings, especially for cross platform code because string splitting rules are different on different platforms. Implementing shell-based process invocation on top of an interface that takes an array of strings is straightforward, but the opposite is not.
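
The easy direction fits in a handful of lines: to run a shell command on top of the array interface, hand the whole string to the shell and let it do the parsing, which is roughly what system(3) does internally. A sketch:

#include <sys/wait.h>
#include <unistd.h>

int run_shell_command(const char *cmd) {
    pid_t pid = fork();
    if (pid == 0) {
        // Delegate all splitting and quoting to the shell.
        execl("/bin/sh", "sh", "-c", cmd, (char *)nullptr);
        _exit(127); // reached only if the exec failed
    }
    int status = -1;
    waitpid(pid, &status, 0);
    return status;
}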

Often command lines are not invoked directly but are instead passed from one program to another, stored to files, passed over networks and so on. It is not uncommon to pass a full command line as a command line argument to a different "wrapper" command and so on. An array of strings is trivial to pass through arbitrarily deep and nested scenarios without data loss. Plain strings, not so much. Many, many, many programs do command string splitting completely wrong. They might split on spaces because it worksforme on this machine and implementing a full string splitter is a lot of work (thousands of lines of very tricky C at the very least). Some programs don't quote their outputs properly. Some don't unquote their inputs properly. Some do quoting unreliably. Sometimes you need to know in advance how many layers of unquoting your string will go through, so you can prequote it sufficiently beforehand (because you can't fix any of the intermediate blobs). Basically every time you pass commands as strings between systems, you get a parsing/quoting problem and a possibility for shell code injection. At the very least the string should carry with it information on whether it is a unix shell command line or a cmd.exe command line. But it doesn't, and can't.
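
To see how little it takes to get this wrong, here is a sketch of the naive whitespace splitter that countless programs effectively contain:

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::string> naive_split(const std::string &cmd) {
    std::istringstream ss(cmd);
    std::vector<std::string> args;
    std::string arg;
    while (ss >> arg) // splits on whitespace, ignores quoting entirely
        args.push_back(arg);
    return args;
}

int main() {
    // One command, one argument containing a space.
    for (const auto &arg : naive_split("touch \"hello world.txt\""))
        std::cout << '[' << arg << "]\n";
    // Prints [touch] ["hello] [world.txt"]: three mangled pieces
    // instead of the two arguments the writer intended.
    return 0;
}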

Because of this almost all applications that deal with command invocation kick the can down the road and use strings rather than arrays, even though the latter is the "correct" solution. For example this is what the Ninja build system does. If you go through the rationale for this it is actually understandable and makes sense. The sad downside is that everyone using Ninja (or any such tool) has to do command quoting and parsing manually and then ninja-quote their quoted command lines.

This is the crux of the problem. Because process invocation is broken on Windows, every single program that deals with cross platform command invocation has to deal with commands as strings rather than an array of strings. This leads to every program using commands as strings, because that is the easy and compatible thing to do (not to mention it gives you the opportunity to close bugs with "your quoting is wrong, wontfix"). This leads to a weird kind of quantum entanglement where having things broken on one platform breaks things on a completely unrelated platform.

Can this be fixed?

Conceptually the fix is simple: add a new function, say, CreateProcessCmdArray to Win32 API. It is identical to plain CreateProcess except that it takes an array of strings rather than a shell command string. The latter can be implemented by running Windows' internal string splitter algorithm and calling the former. Seems doable, and with perfect backwards compatibility even? Sadly, there is a hitch.

It has been brought to my attention via unofficial channels [1] that this will never happen. The people at Microsoft who manage the Win32 API have decreed this part of the API frozen. No new functionality will ever be added to it. The future of Windows is WinRT or UWP or whatever it is called this week.

UWP is conceptually similar to Apple's iOS application bundles. There is only one process which is fully isolated from the rest of the system. Any functionality that needs process isolation (and not just threads) must be put in its own "service" that the app can then communicate with using RPC. This turned out to be a stupid limitation for a desktop OS with hundreds of thousands of preexisting apps, because it would require every Win32 app using multiple processes to be rewritten to fit this new model. Eventually Microsoft caved under app vendor pressure and added the functionality to invoke processes into UWP (with limitations though). At this point they had a chance to do a proper from-scratch redesign for process invocation with the full wealth of knowledge we have obtained since the original design was written around 1982 or so. So can you guess whether they:
  1. Created a proper process invocation function that takes an array of strings?
  2. Exposed CreateProcess unaltered to UWP apps?
You guessed correctly.

Bonus chapter: msvcrt's execve functions

Some of you might have thought waitaminute, the Visual Studio C runtime does ship with functions that take string arrays so this entire blog post is pointless whining. This is true, it does provide said functions. Here is a pseudo-Python implementation for one of them. It is left as an exercise to the reader to determine why it does not help with this particular problem:

def spawn(cmd_array):
    cmd_string = ' '.join(cmd_array)
    CreateProcess(..., cmd_string, ...)

[1] That is to say, everything from here on may be completely wrong. Caveat lector. Do not quote me on this.

Tuesday, November 19, 2019

Some intricacies of ABI stability

There is a big discussion ongoing in the C++ world about ABI stability. People want to make a release of the standard that does a big ABI break, so a lot of old cruft can be removed and made better. This is a big and arduous task, which has a lot of "fun" and interesting edge, corner and hypercorner cases. It might be interesting to look at some of the lesser known ones (this post is not exhaustive, not by a long shot). All information here is specific to Linux, but other OSs should be roughly similar.

The first surprising thing to note is that nobody really cares about ABI stability. Even the people who defend stable ABIs in the committee do not care about ABI stability as such. What they do care about is that existing programs keep on working. A stable ABI is just a tool in making that happen. For many problems it is seemingly the only tool. Nevertheless, ABI stability is not the end goal. If the same outcome can be achieved via some other mechanism, then it can be used instead. Thinking about this for a while leads us to the following idea:
Since "C++ness" is just linking against libstdc++.so, could we not create a new one, say libstdc++2.so, that has a completely different ABI (and even API), build new apps against that and keep the old one around for running old apps?
The answer to this question turns out to be yes. Even better, you can already do this today on any recent Debian based distribution (and probably most other distros too, but I have not tested). By default the Clang C++ compiler shipped by the distros uses the GNU C++ standard library. However you can install the libc++ stdlib via system packages and use it with the -stdlib=libc++ command line argument. If you go even deeper, you find that the GNU standard library's name is libstdc++.so.6, meaning that it has already had five ABI breaking updates.

So … problem solved then? No, not really.

Problem #1: the ABI boundary

Suppose you have a shared library built against the old ABI that exports a function that looks like this:

void do_something(const std::unordered_map<int, int> &m);

If you build code with the new ABI and call this function, the bit representation of the unordered map causes problems. The caller has a pointer to a bunch of bits in the new representation whereas the callee expects bits in the old representation. This code compiles and links but will invoke UB at runtime when called and, at best, crash your app.
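
As a sketch of the situation (assuming, as described above, that the ABI break changes the container layout but not the mangled names):

// somelib.h, shipped by a library built against the old ABI.
#include <unordered_map>
void do_something(const std::unordered_map<int, int> &m);

// app.cpp, built against the new ABI.
int main() {
    std::unordered_map<int, int> m{{1, 2}};
    // The mangled name matches, so this compiles and links, but the
    // callee interprets the bits of m using the old layout.
    do_something(m);
    return 0;
}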

Problem #2: the hidden symbols

This one is a bit complicated and needs some background information. Suppose we have a shared library foo that is implemented in C++ but exposes a plain C API. Internally it makes calls to the C++ standard library. We also have a main program that uses said library. So far, so good, everything works.

Let's add a second shared library called bar that is also implemented in C++ and exposes a C API. We can link the main app against both of these libraries, call them, and everything works.

Now comes the twist. Let's compile the bar library against a new C++ ABI. The main app now links against both foo, which internally uses the old ABI, and bar, which internally uses the new one.

A project mimicking this setup can be obtained from this Github repo. In it the abi1 and abi2 libraries both export a function with the same name that returns an int that is either 1 or 2. Libraries foo and bar check the return value and print a message saying whether they got the value they were expecting. It should be reiterated that the use of the abi libraries is fully internal. Nothing about them leaks to the exposed interface. But when we compile and run the program that calls both libraries, we get the following output snippet:

Foo invoked the correct ABI function.
Bar invoked the wrong ABI function.

What has happened is that both libraries invoked the function from abi1, which means that in the real world bar would have crashed in the same way as in problem #1. Alternatively both libraries could have called abi2, which would have broken foo. Determining when this happens is left as an exercise to the reader.

The reason this happens is that the functions in abi1 and abi2 have the same mangled name, and symbol lookup is global to a process. Once any given name is resolved, all usages anywhere in the same process will point to the same entity. This happens even for non-weak symbols.
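
A sketch of the collision (the function name here is made up; the repo linked above has the real code):

// abi1.cpp, linked into the library that foo uses internally.
int abi_version() { return 1; }

// abi2.cpp, linked into the library that bar uses internally.
int abi_version() { return 2; }

// Both definitions mangle to the same symbol, _Z11abi_versionv.
// The dynamic loader resolves that name once for the whole process,
// so foo and bar both end up calling whichever copy was found first.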

Can this be solved?

As far as I know, there is no known real-world solution to this problem that would scale to a full operating system (i.e. all of Debian, FreeBSD or the like). If there are any university professors reading this who need problems for their grad students, this could be one of them. The problem itself is fairly simple to formulate: make it possible to run two different, ABI incompatible C++ standard libraries within one process. The solution will probably require changes in the compiler, linker and runtime loader. For example, you might extend symbol resolution rules so that they are not global; symbols from, say, library bar would first be looked up in its direct dependencies (in this case only abi2) and only after that in other parts of the tree.

To get you started, here is one potential solution I came up with while writing this post. I have no idea if it actually works, but I could not come up with an obvious thing that would break. I sadly don't have the time or know-how to implement this, but hopefully someone else has.

Let's start by defining that the new ABI is tied to C++23 for simplicity. That is, code compiled with -std=c++23 uses the new ABI and links against libstdc++.so.7, whereas older standard versions use the old ABI. Then we take the Itanium ABI specification and change it so that all mangled names start with, say, _^ rather than _Z as currently. Now we are done. The different ABIs mangle to different names and thus can coexist inside the same process without problems. One would probably need to do some magic inside the standard library implementations so they don't trample on each other.

The only problem this does not solve is calling a shared library with a different ABI. This can be worked around by writing small wrapper functions that expose an internal "C-like" interface and can call external functions directly. These can be linked inside the same library without problems because the two standard libraries can be linked in the same shared library just fine. There is a bit of a performance and maintenance penalty during the transition, but it will go away once all code is rebuilt with the new ABI.
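
A possible shape for such a wrapper (the names are made up by me). The wrapper is compiled against the same standard library as the external function, and only plain C types cross the internal boundary:

#include <cstddef>
#include <unordered_map>

// Implemented in a library built against the other ABI.
void do_something(const std::unordered_map<int, int> &m);

// The rest of our code calls this function instead. Since this
// translation unit is compiled against the callee's standard
// library, building the map here is safe.
extern "C" void do_something_c(const int *keys, const int *values,
                               std::size_t n) {
    std::unordered_map<int, int> m;
    for (std::size_t i = 0; i < n; ++i)
        m[keys[i]] = values[i];
    do_something(m);
}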

Even with this, the transition is not a lightweight operation. But if you plan properly ahead and do the switch, say, once every two standard releases (six years), it should be doable.