Friday, November 16, 2018

The performance impact of zeroing raw memory

When you create a new variable (in C, C++ and other languages) or allocate a block of memory, its value is undefined: it is whatever bit pattern happened to be in that raw memory location at the time. This is faster than initialising all memory (which languages such as Java do), but it is also unsafe and can lead to bugs such as reads of uninitialized data and use-after-free issues.
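As a minimal illustration (my own sketch, not code from the original post), both the local variable and the heap block below contain whatever bits happened to be there, and reading them before writing is undefined behaviour:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int x;                              /* automatic variable: value is indeterminate */
    int *p = malloc(16 * sizeof(int));  /* heap block: contents are indeterminate too */
    printf("%d %d\n", x, p ? p[0] : 0); /* reading either one is undefined behaviour */
    free(p);
    return 0;
}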

There have been several attempts to change this behaviour and require that compilers initialize all memory to a known value, usually zero. These proposals are always rejected with a statement like "that would cause a performance degradation of unknown size" and the issue is dropped. This is not very scientific, so let's see if we can get at least some sort of a measurement of this.

The method

The overhead of uninitialized variables is actually fairly difficult to measure. Compilers don't provide a flag to initialize all variables to zero, so measuring this would require compiler hacking, which is a ton of work. An alternative would be to write a clang-tidy plugin that adds a default initialization to zero for every variable that does not have an initialization clause already. This is also fairly involved, so let's not do that.

The impact of zeroing dynamic memory turns out to be fairly straightforward to measure. All we need to do is build a shared library with custom overrides for malloc, free and memalign, and LD_PRELOAD it into any process we want to measure. The sample code can be found in this GitHub repo.
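The core of such an interposer is small. The following is my own rough sketch rather than the code in the linked repo; a robust version would also need to handle calloc, realloc, memalign and re-entrancy inside dlsym:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <malloc.h>
#include <string.h>

static void *(*real_malloc)(size_t);
static void (*real_free)(void *);

void *malloc(size_t size) {
    if (!real_malloc)
        real_malloc = (void *(*)(size_t)) dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(size);
    if (p)
        memset(p, 0, size);                       /* zero on allocation */
    return p;
}

void free(void *ptr) {
    if (!real_free)
        real_free = (void (*)(void *)) dlsym(RTLD_NEXT, "free");
    if (ptr)
        memset(ptr, 0, malloc_usable_size(ptr));  /* zero on free to catch use-after-free */
    real_free(ptr);
}

Built with something like gcc -shared -fPIC zero_alloc.c -o zero_alloc.so -ldl, it can then be injected with LD_PRELOAD=./zero_alloc.so ./program_to_measure.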

Measurements

We did two measurements. The first one was running Python's pystone benchmark. There was no noticeable difference between zero initialization and no initialization.

The second measurement consisted of compiling a simple C++ iostream helloworld application with optimizations enabled. The results for this experiment were a lot more interesting. Zeroing all memory on malloc made the program 2% slower. Zeroing the memory on both allocation and free (to catch use-after-free bugs) made the program 3.6% slower.

A memory zeroing implementation inside malloc itself would probably have a smaller overhead, because there are cases where the memory does not need to be overwritten explicitly, for example when the allocation is satisfied behind the scenes via mmap/munmap, since the kernel hands out pages that are already zeroed.

Sunday, November 11, 2018

Compile any C++ program 10× faster with this one weird trick!

tl;dr: Is it unity builds? Yes.

I would like to know more!

At work I have to compile a large code base from scratch fairly often. One of its components is a 3D graphics library. It takes around 2 minutes 15 seconds to compile on an 8-core i7. After a while I got bored with this and converted the build to a unity build. In all simplicity what that means is that if you have a target consisting of files foo.cpp, bar.cpp, baz.cpp and so on, you create a single cpp file with the following contents:

#include "foo.cpp"
#include "bar.cpp"
#include "baz.cpp"

Then you tell the build system to build that file instead of the individual ones. With this method the compile time dropped to 1m 50s, which does not seem like much of a gain, but the compilation now used only one CPU core. The remaining 7 are free for other work. If the project had 8 targets of roughly the same size, building them the regular way, one after another, would take around 8 × 2m 15s, or 18 minutes. With unity builds they would take the same 1m 50s assuming perfect parallelisation, which happens fairly often in practice.

Wait, what? How is this even?

The main reason C++ compiles slowly has to do with headers. Merely including a few standard library headers brings in tens or hundreds of thousands of lines of code that must be parsed, verified, converted to an AST and code generated in every translation unit. This is extremely wasteful, especially given that most of that work is thrown away rather than used.

With a unity build every #include is processed only once, regardless of how many times it is used in the component's source files.

Basically this amounts to a caching problem, which is one of the two really hard problems in computer science in addition to naming things and off by one errors.

Why is this not used by everybody then?

There are several downsides and problems. You can't take any old codebase and compile it as a unity build. The first blocker is that things inside source files leak into the others, since they are all textually included one after the other. For example, if two files each declare a static function with the same name, this leads to a name clash and a compilation failure. Similarly, things like using namespace std declarations leak from one file to another, causing havoc.
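Here is a minimal illustration of the clash (my own example, written in plain C for brevity; the exact same thing happens in C++):

/* foo.c */
static int helper(void) { return 1; }
int foo(void) { return helper(); }

/* bar.c */
static int helper(void) { return 2; }   /* fine, it is a separate translation unit */
int bar(void) { return helper(); }

/* unity.c: both helpers now end up in the same translation unit */
#include "foo.c"
#include "bar.c"   /* error: redefinition of 'helper' */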

But perhaps the biggest problem is that every recompilation takes the same amount of time. An incremental rebuild where one file has changed takes a few seconds or so, whereas a unity build takes the full 1m 50s every time. This is a major roadblock to iterative development and the main reason unity builds are not widely used.

A possible workflow with Meson

For simplicity let's assume that we have a project that builds and works with unity builds. Meson has an automatic unity build file generator that can be enabled by setting the value of the unity build option.

This solves the basic build problem but not the incremental one. However, usually you develop only one target at a time (be it a library, executable or module) and want to build only that one incrementally, with everything else as a unity build. This can be done by editing the build definition of the target in question and adding an override option:

executable(..., override_options : ['unity=false'])

Once you are done you can remove the override from the build file to return everything back to normal.

How does this tie in with C++ modules?

Directly? Not in any way, really. However, one of the stated advantages of modules has always been faster build times. There are a few module implementations, but there is very little public data on how they behave with real world codebases. During a CppCon presentation on modules, Google's Chandler Carruth mentioned that in Google's code base modules resulted in a 30% build time reduction.

It was not mentioned whether Google uses unity builds internally, but they almost certainly don't (based on things such as this bug report on Bazel). If we assume that theirs is the fastest existing "classical" C++ build mechanism, which it probably is, the conclusion is that it is still an order of magnitude slower than a unity build of the same source files. A similar performance gap would probably not be tolerated in any other part of the C++ ecosystem.

The shoemaker's children go barefoot.

Tuesday, November 6, 2018

Simple guide to designing pleasant web sites

When is it ok to...

  • Use infinite scrolling pages: never
  • Steal the "/" key for your search widget: never
  • Have a "website app" instead of a good web site: never
  • Break functionality on the site to drive app usage: never
  • Autoplay audio: never
  • Use 500 megs of ram for a web IRC (or equivalent): never
  • Use 100% CPU for animations you can't disable: never
  • Use 100% CPU for animations you can disable: never
  • Provide important information only as PDF: never
  • Have layouts with more pixels for chrome than content: never
  • Run a news aggregator site that opens all links in new windows: never
  • Block main page from showing until all ad trackers are loaded: never
  • Autoplay video clips: only if you are Vimeo, Youtube or a similar site

Saturday, November 3, 2018

Some use cases for shared linking and ABI stability

A recent trend in language design and devops deployment has been to avoid shared libraries. Instead every application is rebuilt and statically linked for maximum performance. This is highly convenient in many cases. Some people even go as far as to declare shared linking, and with it any ABI stability, a dead relic of the past that is not only unnecessary but actively harmful, because maintaining ABI stability slows down language changes and renewal.

This blog post was not written to argue whether this is true or not. Instead it is meant to list many reasons and use cases where shared libraries and ABI stability are useful and which would be hard, or even impossible, to achieve by relying only on static linking.

Many of the issues listed here are written from the perspective of a modern Linux distribution, especially Debian. However I am not a Debian developer so the following is not any sort of an official statement, just my writings as an individual.

Guaranteed update propagation

Debian consists of thousands of packages. Each package's state is managed by a package maintainer. Each maintainer typically looks after between one and a handful of packages, so there are hundreds of maintainers. Each one of them works in relative isolation from the others. That is, they can upload updates to their packages at their own pace. In fact, it is an important part of Debian's social structure that no-one can be forced to do any particular task.

On the other hand, Debian is also very strict about security. If a vulnerability is found in, say, a popular encryption library then it must be possible for one single person to update the encryption code in every single package that uses it, even indirectly. With a stable ABI and shared libraries, this can be done easily. Updating the dependency package (and possibly rebooting the machine) guarantees that every package on the system uses the new library. If packages were statically linked, each package would have to be rebuilt and reuploaded. This would require hundreds of people around the world to work in a coordinated fashion. In a volunteer based system this is not possible, especially for cases that require an embargo.

Update server bandwidth savings

The amount of bandwidth it takes to run a Linux distribution mirror is substantial. As we saw above, it is possible to update individual packages, which keeps downloads fairly small. If everything were statically linked, then every library update would mean downloading the full rebuilt binaries of every affected package. This would mean a 10x to 100x increase in bandwidth requirements. Distro mirrors are already quite heavily loaded and probably could not handle that sort of increase in traffic.

Download bandwidth savings

Most of the world's population does not have a 10 gigabit Ethernet connection for their personal use. In fact there are many people who have only a 2G connection at best, and even that is sporadic. There are also many machines with very poor Internet connections, such as scientific instruments and credit card payment terminals in remote locations. Getting updates to these machines is difficult even now. If update sizes ballooned, it might become completely infeasible.

Shipping prebuilt middleware

There are many providers of middleware (such as in computer games) that will only provide their code as prebuilt libraries (usually shared, because they are harder to reverse engineer). They will not and can not ever ship their source code to customers because that contains all their special sauce. This entire business model relies on a stable ABI.

Software certification

I don't have personal experience with this, so the following entry might be completely wrong. It is based on the best information I had. If you have first hand experience and can either confirm or deny it, please post a comment on this article.

In highly regulated business sectors the problem of certification often comes up. Basically what this means is that each executable is put through an extensive testing cycle. If it passes, it is certified and can be used in production. Specifically, only that exact binary can be used. Any change to the code means the program must be re-certified, which is a time consuming and extremely expensive process.

It may be that the certification cycle is different for operating system components, so applying OS updates provided by the vendor may be faster and cheaper. As long as those updates maintain ABI stability, the actual program does not need to be changed, removing the need to re-certify it.

Extension modules

Suppose you create a program that provides an extension or plugin interface to third party code. Examples include the modding interfaces of many games and, as an extreme example, the entire Eclipse IDE. Supporting this without requiring third party extensions to be shipped as source (and shipping a compiler with your program) requires a stable ABI.
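As an illustration of what such a boundary can look like, here is a minimal sketch of a C plugin interface (all names are invented for this example). As long as the struct layout and the calling convention stay stable, plugins built years apart keep working:

#include <dlfcn.h>
#include <stddef.h>

/* The contract shared between the host and every plugin. */
typedef struct {
    int api_version;                    /* bumped only on incompatible changes */
    const char *(*name)(void);
    int (*execute)(const char *input);
} PluginInterface;

typedef const PluginInterface *(*GetPluginFunc)(void);

/* Host side: load a plugin shared library at runtime. */
static const PluginInterface *load_plugin(const char *path) {
    void *handle = dlopen(path, RTLD_NOW);
    if (!handle)
        return NULL;
    /* every plugin is expected to export this one symbol */
    GetPluginFunc get = (GetPluginFunc) dlsym(handle, "get_plugin_interface");
    return get ? get() : NULL;
}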

Low barrier to entry

One of the main downsides of rebuilding everything from source all the time is the amount of resources it takes. For many this is not a problem, and when asked about it they may even snootily reply with "just buy more machines from AWS".

One of the strong motivations of the free and open source movement has been enablement and empowerment. That is, making it as easy as possible for as many people as possible to participate. There are many people in the world whose only computer is an old laptop or possibly even just a Raspberry Pi. In the current model it is possible to take almost any part of the system and hack on it in isolation (except maybe something like Chromium). If we move to a future where participating in software development requires access to a data center, these people are shut out of contributing.

Supporting slow platforms

One of the main philosophical points of Debian is that every supported architecture must be self hosting. That is, packages for Arm must be built on Arm, Mips packages must be built on Mips and so on. Self hosting is an important goal, because it proves the system works and is self-sustaining in ways that simply using cross built packages does not.

Currently it takes a lot of time to do a full archive rebuild on any of the slower architectures, but it is still feasible. If the amount of work needed for a full rebuild grew by a factor of 10 or 100, it would no longer be achievable. Then the only platforms that could reasonably self-host would be x86, Power, s390x and possibly arm64.

Supporting old binaries

There are many cases where a specific application binary must keep running even though the entire system around it changes. A good example of this is computer and console games. People have paid good money for games on Windows 7 (or Vista, or XP) and they expect them to keep working on Windows 10 as well, even on hardware that did not exist when the game was released. The only known solution to this is a stable ABI. The same problem exists on consoles such as the PS4: every single game released during its life cycle must run on all console system software versions released after the game, even without a network connection for downloading updates.

Errata

Since writing this article I have been told that any Developer may request a rebuild and reupload of a binary package and it happens automatically. So it is possible for one person to fix a package and have its dependents rebuilt, but it would still require lots of compute and bandwidth resources.

Sunday, October 14, 2018

Things Microsoft could do to make the lives of developers easier

A few weeks ago I was at CppCon. One of the presentations was about new stuff in the Visual Studio compiler. The presentation had this slide fairly early on.

[Presentation slide: the mission of the C++ team is to make the lives of all C++ developers on the planet better.]

If that is truly their goal, then here are some things they could do. (Some not specifically about C++ but still related.)

Proper RPATH support

If you have a project that uses shared libraries and you want to run it directly from the build directory, then you really need rpath or something similar to it. A simple way of explaining it is that you add a piece of text inside an executable saying "when running me, search for foo.dll in the directory ../baz/lib".

Since this is not natively supported, people need to resort to awful hacks to make it work:
  • adjusting the PATH envvar to contain the dirs where the DLLs are (because PATH is used to look up DLLs)
  • copying all files to the same directory before running
  • creating a manifest file defining an internal bundle, creating a subdirectory and copying all dependency DLLs there
  • statically linking everything
  • mandating a project layout where everything is in one subdirectory
All of these are nasty hacks. It should be possible to run programs straight from the build dir without needing to copy anything or change envvars.

During CppCon I was told that with the very latest Windows 10 it should be possible to do this "somehow" but googling for this has not uncovered any instructions.

Spawning a process using an array

The way to spawn processes on Windows is the CreateProcess function. Note that it takes a command string, not an array. The spawned program then parses that string into an argument array itself. The documentation does not say how that parsing is done, but presumably it is the same as what cmd.exe does.

What this means is that it is impossible to spawn a process on Windows without jumping through massive quoting hoops. For example, suppose you want to write a Ninja file that calls a specific command. Because of this problem Ninja does not support arrays natively but instead requires every user to provide a single command string, which leads to double quoting: first you need to quote the array into a Windows command string and then you need to quote that according to Ninja's quoting rules.

And then it gets terrible.

The command line length limit on Windows is ridiculously short. Even fairly simple link commands are too long. Thus you need to write the actual command to a response file, which the tool then reads and parses on its own. Since every program writes its own parsing and splitting code, you may find that you need to quote things differently depending on whether you are using the command line or a response file. You get one guess as to whether some (but not all) programs originating from Unix parse their response files according to Unix shell rules, even on Windows.

Now there might be a few people out there who just got outraged, because msvcrt does in fact have functions to spawn processes with arrays. They are a complete lie. Here is a rough pseudocode representation of how they are implemented:

def spawn_process(command_array):
    command_line = ' '.join(command_array)
    return CreateProcess(command_line)
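To spawn a process reliably you therefore have to build the command string yourself. The following sketch (my own illustration, based on the commonly documented quoting rules that the Microsoft C runtime uses when splitting a command line back into argv) shows roughly what appending a single argument involves; buffer size handling is omitted for brevity:

#include <string.h>

/* Append one argument to 'out' so that the receiving program's argv
 * parser reconstructs it verbatim. */
static void append_quoted(char *out, const char *arg) {
    if (*arg != '\0' && strpbrk(arg, " \t\n\v\"") == NULL) {
        strcat(out, arg);                 /* nothing special, use as-is */
        return;
    }
    strcat(out, "\"");
    for (const char *p = arg;; ++p) {
        size_t backslashes = 0;
        while (*p == '\\') { ++backslashes; ++p; }
        if (*p == '\0') {
            /* double trailing backslashes so the closing quote stays a quote */
            for (size_t i = 0; i < backslashes * 2; ++i) strcat(out, "\\");
            break;
        } else if (*p == '"') {
            /* double the backslashes and escape the quote itself */
            for (size_t i = 0; i < backslashes * 2 + 1; ++i) strcat(out, "\\");
            strcat(out, "\"");
        } else {
            for (size_t i = 0; i < backslashes; ++i) strcat(out, "\\");
            strncat(out, p, 1);
        }
    }
    strcat(out, "\"");
}

And that only covers arguments passed to well behaved programs; anything that goes through cmd.exe needs yet another layer of escaping.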

Support GCC's destructor extension in plain C

RAII is awesome. It is, in fact, so awesome that GCC ships an extension for using it with plain C: the cleanup variable attribute. It is used by many plain C projects such as GLib and systemd. I have spoken to many C developers and they really love that feature, and they absolutely hate that they can't use it in code that has to support MSVC.
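For those who have not seen it, this is roughly what using the extension looks like (a minimal sketch; the _cleanup_free_ macro mimics the naming style of systemd and GLib but is defined here just for the example):

#include <stdio.h>
#include <stdlib.h>

/* The cleanup handler receives a pointer to the annotated variable. */
static void free_charp(char **p) {
    free(*p);
}

#define _cleanup_free_ __attribute__((cleanup(free_charp)))

int main(void) {
    _cleanup_free_ char *buf = malloc(128);
    if (!buf)
        return 1;
    snprintf(buf, 128, "hello");
    puts(buf);
    return 0;   /* buf is freed automatically when it goes out of scope */
}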

Adding this support would be great and make the world a better place in several ways including:
  • you can use libraries that use this feature as dependencies when building with MSVC
  • multiplatform projects can start using destructors freely
  • all the millions of lines of C code that exist in the world (and which will not be rewritten any time soon) can be made iteratively safer and more reliable
Eventually it would be nice to get this feature in the C standard, but that is unlikely to happen any time soon.

Performance optimize MSBuild

Running the test suite of Meson with the Visual Studio compiler takes roughly 6-7 minutes when using the Ninja backend and 14-18 minutes when using the MSBuild backend. Granted, this is a worst case scenario of running many small independent builds in a row, but it is still frustratingly slow. The same can be seen when using the Visual Studio IDE: after typing Ctrl-Shift-B there is usually a noticeable lag before any compilation actually starts.

Kill the need for vcvarsall.bat and provide parallel installable compilers

Visual Studio compilers are not in the PATH by default. You have to either start a special shell or run a magic bat file from a magic directory that sets up the environment so that the compilers work. If you go looking in the installation directory there are many different directories, all of which contain an executable called cl.exe. Which one you get depends on PATH settings, so you can only run one compiler at a time. This makes it really difficult to, for example, run multiple different VS versions (15, 17, native, cross etc.) from a single script.

This same problem was solved on the Unix side ages ago. The trick is to provide many executables with different names, for example cl15-x86.exe, cl17-arm.exe and cl17-x64.exe. Each of these executables would set up the equivalent of vcvarsall.bat for its own process and then forward the actual compilation to the real compiler, wherever it may be hidden in the file system hierarchy. These binaries could then be put in a single path location and used from any command prompt, even in parallel. This is particularly useful for cross compilation projects where you need to build a code generator with the native compiler and then use it to generate source code for the cross compiler.
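Such a wrapper would not need to be complicated. Here is a rough sketch; all paths and environment values are invented placeholders, and a real wrapper would derive them from the actual VS installation just like vcvarsall.bat does:

#include <stdlib.h>
#include <process.h>

int main(int argc, char **argv) {
    (void)argc;
    /* set up, for this process only, what vcvarsall.bat sets up globally */
    _putenv("INCLUDE=C:\\SomeVsInstallDir\\VC\\include");
    _putenv("LIB=C:\\SomeVsInstallDir\\VC\\lib\\x64");

    /* forward the actual compilation to the real compiler binary
       (ironically, _spawnv has the very quoting problem described above) */
    const char *real_cl = "C:\\SomeVsInstallDir\\VC\\bin\\Hostx64\\x64\\cl.exe";
    argv[0] = (char *)real_cl;
    return (int)_spawnv(_P_WAIT, real_cl, (const char * const *)argv);
}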

Have you reported these as bugs upstream?

No. Nothing in this blog post is new; these are all issues that have been known for 20+ years and most likely have been reported to Microsoft dozens, if not hundreds, of times. The fact that these things have not been fixed is a question of corporate priorities. As a random-non-Windows-using-dude-on-the-internet I don't really have any influence on those.

Sunday, September 9, 2018

The compiler as a shared library

Since time immemorial, compilers have been run as standalone batch processes. If you have 50 files to compile, you invoke the compiler 50 times, once on each file. Since each compilation is independent of all the others, the work can be parallelised perfectly. This seems like a simple and optimal solution.

But, as is commonly the case, this is not the whole truth. When compiling code there are many subtasks that are common to each individual compilation, and this causes a lot of duplicated effort. Perhaps the best known case is C++ templates. They are parsed and code generated for each file that uses them, yielding the same code in dozens of object files. Then the linker comes along and throws all but one of them away. A bunch of other issues are discussed in this video from the LLVM developers' conference:

A problem of state preservation

One of the best known solutions to this problem is precompiled headers. They work roughly like this:
  1. Parse the contents of headers
  2. Dump compiler internal state to a file
  3. Load the file on each compiler invocation
The two main problems with this are that it requires someone to design and implement a full serialisation format for the compiler-internal data, which is a lot of tedious work that very few people will volunteer to do, and that the file needs to be loaded explicitly from disk in every compilation process, which takes time and requires the build system to tell the compiler how to do it. The granularity is also fairly coarse.

Ideally we would like to preserve as much data as possible between two compiler invocations without needing to serialise it to disk. As discussed in the video above, one solution is to have a "compiler plugin".

Almost every build system currently works roughly like this:
  1. Read build definition (such as a Ninja file)
  2. For each compilation, spawn a new compiler process and invoke the compiler executable
  3. Shutdown
The proposed new model would go like this (no build system currently supports this, but adding it to e.g. Ninja is not a massive undertaking):
  1. Read build definition
  2. dlopen the compiler shared library file
  3. For each compilation, create a new compiler object and invoke compilation using e.g. a thread pool
  4. Destroy the compiler objects and dlclose the file
  5. Shutdown
In this model all compilation jobs live in the same process, so they can coordinate work behind the scenes however they wish. This requires some tricky code with thread safe caches and the like, but it is all internal to the compiler and never exposed. Even without caching this makes a difference on platforms such as Windows where process spawning is slow.

The big question remaining here is the API to use. It should have the following requirements:
  1. Must be ABI stable in the C sense
  2. Must be supportable on all compilers for all languages
  3. Must expose the full functionality of the compiler
  4. Must support an arbitrary number of compiler tasks within a single process

An API proposal for compiler invocation

On the face of it this seems like an impossible task. The API surface of a compiler is enormous and differs from compiler to compiler. However all of them already expose a stable ABI: the command line argument arrays. Exploiting this allows us to create an API supporting all of the requirements above with only six functions.

First we initialise the library:

CompilerService* compiler_init_service();

Here CompilerService is an opaque struct representing a state object. There is one of these per process and it holds (internally) all the cached state and related things. Then we create a compiler object, one per compilation task:

Compiler* compiler_create_compiler(CompilerService *service);

Now we can invoke the compilation:

CompilationResult* compiler_compile(Compiler *c, int argc, const char **argv);

This invocation matches the signature of the main function. Since we are not going through the shell/kernel we can pass an arbitrary number of arguments without needing to use response files, quote shell characters or any other nastiness. The return value contains the return code and the strings for stdout and stderr. The standalone compiler executable such as cl.exe could (in theory ;-) be implemented by just calling these functions and returning the results to the calling process.

The last thing we need are the deallocation functions:

void compiler_free_compilation_result(CompilationResult *r);
void compiler_free_compiler(Compiler *c);
void compiler_free_service(CompilerService *s);
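Put together, a build tool would drive this API roughly as follows. This is only a sketch: none of these functions exist in any compiler yet, and the accessors on CompilationResult are left out because the proposal above does not define them.

int main(void) {
    CompilerService *service = compiler_init_service();

    /* one compiler object per compilation task; a build tool would create
       many of these and run them from a thread pool */
    Compiler *c = compiler_create_compiler(service);

    const char *argv[] = {"cc", "-O2", "-c", "foo.c", "-o", "foo.o"};
    CompilationResult *result = compiler_compile(c, 6, argv);

    /* inspect result for the return code and the captured stdout/stderr */

    compiler_free_compilation_result(result);
    compiler_free_compiler(c);
    compiler_free_service(service);
    return 0;
}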

When will this be available in <my favorite compiler>?

Probably not soon; this is all slideware. There is no actual code implementing it (that I know of, at least). The big problem here is that most compilers have not been written with this sort of usage in mind. They have global variables and other things hostile to being used as a shared library. Fixing all that to be thread safe and isolated is a lot of work. LLVM is probably the compiler that could get this done most easily, since it has been designed from the beginning to be used as a library.

Saturday, August 18, 2018

Linker symbol lookup order does not work the way you think

A common problem when linking has to do with circular dependencies. Suppose you have a program that looks like this:



Here program calls into function one, which is in library A. That calls into function two, which is in library B. Finally that calls into function three, which is back in library A again.
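In code the setup could look like this (file names and contents are invented for illustration; what matters is which static library each object file ends up in):

/* a1.c, goes into liba.a */
int two(void);
int one(void) { return two(); }

/* a2.c, also goes into liba.a */
int three(void) { return 42; }

/* b.c, goes into libb.a */
int three(void);
int two(void) { return three(); }

/* prog.c */
int one(void);
int main(void) { return one(); }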

Let's assume that we use the following linker line to build the final executable:

gcc -o program prog.o liba.a libb.a

Because linkers were originally designed in the 70s, they are optimised for minimal resource usage. In this particular case the linker will first process the object file and then library A. It will detect that function one is used, so it will take that function's implementation and then throw the rest of library A away. It will then process library B, take function two into the final program and note that function three is also needed. Because library A was already thrown away, the linker cannot find three and errors out. The fix is to specify A twice on the command line.

This is how everyone has been told things work, and if you search the Internet you will find many pages explaining this and how to set up linker command lines correctly.

But is this what actually happens?

Let's start with Visual Studio

Suppose you were to do this in Visual Studio. What do you think would happen? There are four different possibilities:
  1. Linking fails with missing symbol three.
  2. Linking succeeds and program works.
  3. Either 1. or 2. happens, but there is not enough information to tell which.
  4. Linking succeeds but the final executable does not run.
The correct answer is 2. Visual Studio's linker is smart, keeps all specified libraries open and uses them to resolve symbols. This means that you don't have to add any library on the command line twice.

Onwards to macOS

Here we have the same question as above, but using macOS's default linker. The choices are also the same as above.

The correct answer is also 2. The macOS linker keeps symbols around just like Visual Studio's does.

What about Linux?

What happens if you do the same thing on Linux using the default GNU linker? The choices are again the same as above.

Most people would probably guess that the correct answer here is 1. But it's not. What actually happens is 3. That is, the linking can either succeed or fail depending on external circumstances.

The difference here is whether functions one and three are defined in the same source file (and thus end up in the same object file) or not. If they are in the same source file, linking succeeds; if they are in separate files, it fails. This would indicate that GNU ld does not resolve individual symbols but instead copies object files out of the AR archive wholesale if any of their symbols are needed.

What does this mean?

For example it means that if you build your targets as unity builds, their entire symbol resolution logic changes. This is probably quite rare but can be extremely confusing when it happens. You might also have a fully working build that breaks if you move a function from one file to another. This is something that really should not happen, and when it does, things get very confusing.

The bigger issue here is that symbol resolution works differently on different platforms. Normally this should not be a problem, because symbol names must be unique (or be weak symbols, but let's not go there) or the behaviour is undefined. It does, however, place a big burden on cross platform projects and build systems, because you need very complex logic if you wish to deduplicate linker flags. This is a fairly common occurrence even if you don't have circular dependencies. For example, when building GStreamer with Meson some time ago, the undeduplicated linker line contained hundreds of duplicated library entries (it still does, but not nearly as many).

The best possible solution would be for GNU ld to start behaving the same way as the VS and macOS linkers. That way all major platforms would behave the same and things would get a lot simpler. In the meantime one should be able to simulate this with linker grouping flags:
  1. Go through all linker arguments and split them into libraries that are linked with link_whole and those that are not. Throw away any existing linker grouping.
  2. Deduplicate the former and put them at the beginning of the link line with the requisite link_whole arguments.
  3. Deduplicate all entries in the list of libraries that don't get linked fully.
  4. Put the result of 3 on the command line in a single linker group.
This should work and would match fairly accurately what VS and the macOS linker already do, so at least all cross platform projects should already work out of the box.

What about other platforms?

The code is here, feel free to try it out yourself.