Friday, November 16, 2018

The performance impact of zeroing raw memory

When you create a new variable (in C, C++ and other languages) or allocate a block of memory, its value is undefined: it is whatever bit pattern happened to be in that raw memory location at the time. This is faster than initialising all memory (which languages such as Java do), but it is also unsafe and can lead to bugs, such as use-after-free issues.

There have been several attempts to change this behaviour and require that compilers initialize all memory to a known value, usually zero. This is always rejected with a statement like "that would cause a performance degradation of unknown size" and the issue is dropped. That is not very scientific, so let's see if we can get at least some sort of measurement of the cost.

The method

The overhead of uninitialized variables is actually fairly difficult to measure. Compilers don't provide a flag to initialize all variables to zero, so measuring it would require compiler hacking, which is a ton of work. An alternative would be to write a clang-tidy plugin and add a default initialization to zero for all variables that don't already have an initialization clause. This is also fairly involved, so let's not do that.

The impact on dynamic memory turns out to be fairly straightforward to measure. All we need to do is to build a shared library with custom overrides for malloc, free and memalign and LD_PRELOAD it into any process we want to measure. The sample code can be found in this GitHub repo.
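
What follows is a minimal sketch of such an interposer (not the exact code from the repo). It assumes glibc, which exports its internal __libc_malloc, __libc_free and __libc_memalign entry points as well as malloc_usable_size, so the overrides can delegate to the real allocator without recursing into themselves:

// zero_alloc.cpp: a minimal LD_PRELOAD interposer sketch (assumes glibc).
// Build: g++ -shared -fPIC -O2 zero_alloc.cpp -o zero_alloc.so
// Use:   LD_PRELOAD=./zero_alloc.so ./program_to_measure
#include <stddef.h>   // size_t
#include <string.h>   // memset

// glibc's internal allocator entry points, declared by hand so that the
// overrides below don't clash with the standard header declarations.
extern "C" void *__libc_malloc(size_t size);
extern "C" void __libc_free(void *ptr);
extern "C" void *__libc_memalign(size_t alignment, size_t size);
extern "C" size_t malloc_usable_size(void *ptr);

extern "C" void *malloc(size_t size) {
    void *p = __libc_malloc(size);
    if (p)
        memset(p, 0, size);                      // zero on allocation
    return p;
}

extern "C" void *memalign(size_t alignment, size_t size) {
    void *p = __libc_memalign(alignment, size);
    if (p)
        memset(p, 0, size);                      // zero on allocation
    return p;
}

extern "C" void free(void *ptr) {
    if (ptr)
        memset(ptr, 0, malloc_usable_size(ptr)); // zero on free to catch use-after-free
    __libc_free(ptr);
}

// calloc and realloc are left out for brevity; a full interposer would
// override them in the same way.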

Measurements

We did two measurements. The first one was running Python's pystone benchmark. There was no noticeable difference between zero initialization and no initialization.

The second measurement consisted of compiling a simple C++ iostream helloworld application with optimizations enabled. The results for this experiment were a lot more interesting. Zeroing all memory on malloc made the program 2% slower. Zeroing the memory on both allocation and free (to catch use-after-free bugs) made the program 3.6% slower.

A memory zeroing implementation inside malloc itself would probably have a smaller overhead, because there are cases where the memory does not need to be explicitly overwritten at all, for example when the allocation is served behind the scenes via mmap/munmap and the kernel already hands out zeroed pages.

Monday, November 12, 2018

Compile any C++ program 10× faster with this one weird trick!

tl;dr: Is it unity builds? Yes.

I would like to know more!

At work I have to compile a large code base from scratch fairly often. One of its components is a 3D graphics library. It takes around 2 minutes 15 seconds to compile using an 8 core i7. After a while I got bored with this and converted the system to use a unity build. In all simplicity what that means is that if you have a target consisting of files foo.cpp, bar.cpp, baz.cpp and so on, you create a single cpp file with the following contents:

#include<foo.cpp>
#include<bar.cpp>
#include<baz.cpp>

Then you tell the build system to build that file instead of the individual ones. With this method the compile time dropped to 1m 50s, which does not seem like much of a gain, but the compilation used only one CPU core. The remaining 7 are free for other work. If the project had 8 targets of roughly the same size, building them one after the other the regular way would take 8 × 2m 15s, or 18 minutes. As unity builds they would all finish in the same 1m 50s, assuming perfect parallelisation, which happens fairly often in practice.

Wait, what? How is this even?

The main reason C++ compiles slowly has to do with headers. Merely including a few standard library headers brings in tens or hundreds of thousands of lines of code that must be parsed, verified, converted to an AST and run through code generation in every translation unit. This is extremely wasteful, especially given that most of that work is not used but simply thrown away.
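
As a rough illustration, even the canonical hello world drags in an enormous amount of text (the exact numbers depend on the compiler and standard library version):

// hello.cpp
#include <iostream>

int main() {
    std::cout << "Hello, world!\n";
    return 0;
}

// Running "g++ -E hello.cpp | wc -l" prints tens of thousands of lines of
// preprocessed output, and every translation unit that includes <iostream>
// has to do all of that parsing work again.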

With a unity build every #include is processed only once, regardless of how many times it is used in the component's source files.

Basically this amounts to a caching problem, which is one of the two really hard problems in computer science, along with naming things and off-by-one errors.

Why is this not used by everybody then?

There are several downsides and problems. You can't take any old codebase and compile it as a unity build. The first blocker is that things inside source files leak into other ones, since they are all textually included one after the other. For example, if two files each declare a static function with the same name, the result is a name clash and a compilation failure. Similarly, things like using namespace std declarations leak from one file to the next, causing havoc.
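
As a concrete illustration (the file names here are made up), these two files compile fine as separate translation units, but the unity file that includes them both fails with a redefinition error:

// foo.cpp
static int helper() { return 1; }   // file-local helper, invisible outside foo.cpp
int foo() { return helper(); }

// bar.cpp
static int helper() { return 2; }   // same name, perfectly fine as its own translation unit
int bar() { return helper(); }

// unity.cpp
// error: redefinition of 'int helper()'
#include "foo.cpp"
#include "bar.cpp"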

But perhaps the biggest problem is that every recompilation takes the same time. An incremental rebuild where one file has changed takes a few seconds or so, whereas a unity build takes the full 1m 50s every time. This is a major roadblock to iterative development and the main reason unity builds are not widely used.

A possible workflow with Meson

For simplicity let's assume that we have a project that builds and works with unity builds. Meson has an automatic unity build file generator that can be enabled by setting the unity built-in option when configuring the build directory.

This solves the basic build problem but not the incremental one. However usually you'd develop only one target (be it a library, executable or module) and want to build only that one incrementally and everything else as a unity build. This can be done by editing the build definition of the target in question and adding an override option:

executable(..., override_options : ['unity=false'])

Once you are done you can remove the override from the build file to return everything back to normal.

How does this tie in with C++ modules?

Directly? Not in any way really. However, one of the stated advantages of modules has always been faster build times. There are a few module implementations, but there is very little public data on how they behave with real world codebases. During a CppCon presentation on modules, Google's Chandler Carruth mentioned that in Google's code base modules resulted in a 30% build time reduction.

It was not mentioned whether Google uses unity builds internally but they almost certainly don't (based on things such as this bug report on Bazel). If we assume that theirs is the fastest existing "classical" C++ build mechanism, which it probably is, the conclusion is that it is an order of magnitude slower than a unity build on the same source files. A similar performance gap would probably not be tolerated in any other part of the C++ ecosystem.

The shoemaker's children go barefoot.

Tuesday, November 6, 2018

Simple guide to designing pleasant web sites

When is it ok to...

Use infinite scrolling pages: never
Steal the "/" key for your search widget: never
Have a "website app" instead of a good web site: never
Break functionality on the site to drive app usage: never
Autoplay audio: never
Use 500 megs of ram for a web IRC (or equivalent): never
Use 100% CPU for animations you can't disable: never
Use 100% CPU for animations you can disable: never
Provide important information only as PDF: never
Have layouts with more pixels for chrome than content: never
Run a news aggregator site that opens all links in new windows: never
Block main page from showing until all ad trackers are loaded: never
Autoplay video clips: only if you are Vimeo, Youtube or a similar site

Saturday, November 3, 2018

Some use cases for shared linking and ABI stability

A recent trend in language design and devops deployment has been to not use shared libraries. Instead every application is rebuilt and statically linked for maximum performance. This is highly convenient in many cases. Some people even go as far as to declare shared linking, and with it any ABI stability, a dead relic of the past that is not only unnecessary but actively harmful, because maintaining ABI stability slows down language changes and renewal.

This blog post was not written to argue whether this is true or not. Instead it is meant to list many reasons and use cases where shared libraries and ABI stability are useful and which would be hard, or even impossible, to achieve by relying only on static linking.

Many of the issues listed here are written from the perspective of a modern Linux distribution, especially Debian. However I am not a Debian developer so the following is not any sort of an official statement, just my writings as an individual.

Guaranteed update propagation

Debian consists of thousands of packages. Each package is managed by a package maintainer. Each maintainer typically looks after between one and a handful of packages, so there are hundreds of them, and each one works in relative isolation from the others. That is, they can upload updates to their packages at their own pace. In fact, it is an important part of Debian's social structure that no-one can be forced to do any particular task.

On the other hand, Debian is also very strict about security. If a vulnerability is found in, say, a popular encryption library then it must be possible for one single person to update the encryption code in every single package that uses it, even indirectly. With a stable ABI and shared libraries, this can be done easily. Updating the dependency package (and possibly rebooting the machine) guarantees that every package on the system uses the new library. If packages were statically linked, each package would have to be rebuilt and reuploaded. This would require hundreds of people around the world to work in a coordinated fashion. In a volunteer based system this is not possible, especially for cases that require an embargo.

Update server bandwidth savings

The amount of bandwidth it takes to run a Linux distribution mirror is substantial. As we saw above, it is possible to update single packages, which keeps downloads fairly small. If everything were statically linked, then every library update would mean downloading the full rebuilt binaries of every affected package. This would mean a 10x to 100x increase in bandwidth requirements. Distro mirrors are already quite heavily loaded and probably could not handle that sort of increase in traffic.

Download bandwidth savings

Most of the population in the world does not have a dedicated 10 gigabit Ethernet connection for their personal use. In fact there are many people who have only a 2G connection at best, and even that is sporadic. There are also many machines with very poor Internet connections, such as scientific instruments and credit card payment terminals in remote locations. Getting updates to these machines is difficult even now. If update sizes ballooned, it might become completely infeasible.

Shipping prebuilt middleware

There are many providers of middleware (such as in computer games) that will only provide their code as prebuilt libraries (usually shared, because they are harder to reverse engineer). They will not and can not ever ship their source code to customers because that contains all their special sauce. This entire business model relies on a stable ABI.

Software certification

I don't have personal experience with this, so the following entry might be completely false. However, it is based on the best information I have. If you have first-hand experience and can either confirm or deny this, please post a comment on this article.

In highly regulated business sectors the problem of certification often comes up. Basically what this means is that each executable is put through an extensive testing cycle. If it passes, it is certified and can be used in production. Specifically, only that exact binary can be used. Any change to the code means the program must be re-certified, which is a time consuming and extremely expensive process.

It may be that the certification cycle is different for operating system components. Thus applying OS updates provided by the vendor may be faster and cheaper. As long as those updates maintain ABI stability, the actual program does not need to be changed, which removes the need to re-certify it.

Extension modules

Suppose you create a program that provides an extension or plugin interface to third party code. Examples include the modding interface of many games and, as an extreme example, the entire Eclipse IDE. Supporting this without needing to provide third party extensions as source (and shipping a compiler with your program) requires a stable ABI.
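
A minimal sketch of what such a plugin boundary typically looks like (all file and symbol names here are hypothetical): the host loads a prebuilt plugin through a plain C ABI, and third-party binaries keep working across compiler and runtime upgrades only because that ABI stays stable.

// plugin_host.cpp - build with: g++ plugin_host.cpp -ldl
#include <dlfcn.h>
#include <cstdio>

// The agreed-upon entry point every plugin exports: a plain C function
// with a fixed, never-changing signature.
typedef int (*plugin_run_fn)(const char *argument);

int main() {
    // Load a plugin that was shipped only as a prebuilt .so, no source needed.
    void *handle = dlopen("./sample_plugin.so", RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // Look up the exported entry point by its unmangled C name.
    auto run = reinterpret_cast<plugin_run_fn>(dlsym(handle, "plugin_run"));
    if (!run) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        return 1;
    }
    int result = run("hello from the host");
    dlclose(handle);
    return result;
}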

Low barrier to entry

One of the main downsides of rebuilding everything from source all the time is the amount of resources it takes. For many this is not a problem, and when asked about it they may even snootily reply with "just buy more machines from AWS".

One of the strong motivations of the free and open source movement has been enablement and empowerment. That is, making it as easy as possible for as many people as possible to participate. There are many people in the world whose only computer is an old laptop or possibly even just a Raspberry Pi. In the current model it is possible to take any part of the system and hack on it in isolation (except maybe something like Chromium). If we go to a future where participating in software development requires access to a data center, these people are prevented from contributing.

Supporting slow platforms

One of the main philosophical points of Debian is that every supported architecture must be self hosting. That is, packages for Arm must be built on Arm, Mips packages must be built on Mips and so on. Self hosting is an important goal, because it proves the system works and is self-sustaining in ways that simply using cross built packages does not.

Currently it takes a lot of time to do a full archive rebuild using any of the slower architectures, but it is still feasible. If the amount of work needed to do a full rebuild grows by a factor of 10 or 100, it is no longer achievable. Thus the only platforms that could reasonably self-host would be x86, Power, s390x and possibly arm64.

Supporting old binaries

There are many cases where a specific application binary must keep running even though the entire system around it changes. A good example of this is computer and console games. People have paid good money for games on Windows 7 (or Vista, or XP) and they expect them to keep working on Windows 10 as well, even on hardware that did not exist back when the game was released. The only known solution to this is a stable ABI. The same problem occurs with consoles such as the PS4: every single game released during its life cycle must run on all console system software versions released after the game, even without a network connection for downloading updates.

Errata

Since writing this article I have been told that any Debian Developer may request a rebuild and reupload of a binary package, and it happens automatically. So it is possible for one person to fix a package and have its dependents rebuilt, but it would still require a lot of compute and bandwidth resources.