Thursday, December 31, 2015

Are exceptions slower than error objects?

One stated reason for using error objects/error codes instead of exceptions is that they are faster. The real question, then, is: are they? I wrote a bunch of C and C++ test applications and put them up on GitHub.

The tests are simple. They call a function that either does nothing or throws an exception/returns an error object depending on its argument in a tight loop. The implementation is put in a separate file and LTO is not used so the compiler can't inline the code and thus optimize away the checks.

The error objects used mimic what GLib's GError does. That is, each one contains a dynamically allocated string describing the error (a dummy string in our case). In every error case the created error object needs to be deallocated properly.
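To make the comparison concrete, here is a minimal sketch of what the two flavours of test function look like; the names and the error struct are illustrative rather than the exact code in the repository:

#include <stdexcept>
#include <string>

// GError-like error object: carries a dynamically allocated
// description that the caller must free on every error path.
struct Error {
    std::string message;
};

// Error object flavour: returns false and fills *err on failure.
bool do_work_error(bool fail, Error **err) {
    if (fail) {
        *err = new Error{"A dummy error message"};
        return false;
    }
    return true;
}

// Exception flavour: throws on failure, otherwise does nothing.
void do_work_throw(bool fail) {
    if (fail)
        throw std::runtime_error("A dummy error message");
}

The benchmark then calls one of these in a tight loop, deleting the error object or catching the exception whenever an error is reported.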

A sample run on OSX looks like this (output directly from Meson's ninja benchmark command):

  1/6 exceptions, not thrown                  OK      0.00520 s +- 0.00009 s
  2/6 exceptions, thrown                      OK      1.65206 s +- 0.01488 s
  3/6 error object, not created               OK      0.00563 s +- 0.00017 s
  4/6 error object, created                   OK      0.19703 s +- 0.00769 s
  5/6 5/loop, exceptions, no throws           OK      0.00528 s +- 0.00017 s
  6/6 5/loop, error object, no errors         OK      0.00669 s +- 0.00072 s

Here we see that when errors happen, exceptions (case 2) are 10x slower than error objects (case 4). We can also see that when errors do not happen, exceptions (case 1) are measurably faster than error objects (case 3).

Cases 5 and 6 are the same as 1 and 3, respectively, except that they call the function five times in a single loop iteration. This does not change the end result, exceptions are still a bit faster.

I ran this test on a bunch of machines, operating systems and compilers (OSX, Linux, Windows, GCC, Clang, Visual Studio, a Raspberry Pi and so on) and the results were consistent. When not thrown, exceptions were never slower and usually a bit faster than error objects. The only exception was MinGW on Windows XP, where exceptions were noticeably slower even when not thrown. Thrown exceptions were always roughly 10x slower than error objects.

The reason for this is probably (I haven't looked at the resulting assembly) that the error object code always has to execute a branch to check for an error, which slows down the CPU even when no error occurs. Exception implementations are optimised for the non-throwing case and avoid the need for a comparison altogether.

Pulling all that together, should you use exceptions in high performance code or not? The answer is maybe. If you have a case where exceptions are rare, they might very well be faster. This is very data dependent, so if this is an issue that really matters to you, run your own benchmarks and make the decision based on measurements rather than wild guessing.

Monday, December 21, 2015

Always use / for directory separation

It is common wisdom that '\' is the path separator on Windows and '/' is the separator on Unix-like operating systems. It is also common wisdom that you should always use the native path separator in your programs to minimise errors.

The first one of these is flat out wrong and the second one can be questioned.

On Windows both '\' and '/' work as directory separators. You can even mix them within one path. The end result is that '/' is the truly universal directory separator, it works everywhere (except maybe classic Mac OS, which is irrelevant).

So the question then is whether you should use '\' or '/' when dealing with paths on Windows. I highly recommend always using '/'. The reason for this is a bit unexpected but makes perfect sense once you get it.

The difference between these two characters is that '\' is a quoting character whereas '/' is not. You probably have to pass your path strings through many systems (bat files, files persisted to disk, SQL, etc) using libraries and frameworks outside your control. They will most likely have bugs in the way they handle quotes. You may need to quote your quote characters to get them through, and you might need to quote them differently in different parts of your program. Quoting issues are the kind of bugs that strike you at random, usually in production, and are really difficult to hunt down.
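As a small illustration, here is how the same (made up) Windows path looks in a C++ string literal; the backslash version needs escaping already at this first layer and again at every quoting layer it passes through later:

#include <cstdio>

int main() {
    // Every '\' must be escaped, and will need escaping again in
    // regexes, generated bat files, JSON and so on.
    const char *with_backslashes = "C:\\Program Files\\MyApp\\data.txt";

    // '/' is not a quoting character, so the literal is simply the path
    // and survives further serialisation untouched.
    const char *with_slashes = "C:/Program Files/MyApp/data.txt";

    std::printf("%s\n%s\n", with_backslashes, with_slashes);
    return 0;
}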

So always use '/' if possible. It will survive any serialisation/deserialisation combination without problems since '/' is not a special character.

Friday, November 27, 2015

Build speed comparison on the Raspberry Pi 2

A common misunderstanding is that all build systems are roughly as fast. This is not the case; the differences can be dramatic, especially on low end hardware such as the Raspberry Pi. To demonstrate, I did some benchmarks. The code can be found on GitHub.

The program is a network debugging tool, but we don't care what it actually does. The reason it is used in this test is that it is a nice standalone executable implemented in Qt5 so compiling it takes a fair bit of CPU and it should be a fairly good example of real world applications.

The repository contains build definitions for Meson and CMake. Since the latter is currently more popular let's use it as a baseline. Compiling the application on the Raspberry Pi 2 with the Ninja backend takes 2 minutes 20 seconds.

Compiling the same project with Meson takes 1 minute and 36 seconds. That is over 30% faster. The reason for this is that Meson has native support for precompiled headers. CMake does not have this feature and, according to CMake's bugzilla, it will not have it in the future, either.

But let's not stop there. Meson also has native support for unity builds. The core principle is simple. Instead of compiling all files one by one, create a top level file that #includes all of them and compile only that. This can make things dramatically faster when you compile everything, but the downside is that it slows down incremental builds quite a lot.
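For illustration, a unity build file is nothing more exotic than a single source file that includes the others (the file names here are made up):

// unity.cpp: the only file handed to the compiler in a unity build.
// Common headers get parsed once here instead of once per source file.
#include "mainwindow.cpp"
#include "networkmodel.cpp"
#include "packetparser.cpp"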

A Meson unity build of the test application takes only 39 seconds, which is over 70% faster than plain CMake.

With this simple experiment we find that a build system can have a massive impact on your project with zero code changes.

Sunday, November 15, 2015

Solving the C/C++ dependency problem through build system composability

The perennial problem in C and C++ development is dealing with dependencies. It is relatively straightforward on Linux where you have package managers (assuming your dependency is packaged in the distro you are using) but becomes a whole lot of work the second you step outside this zone. To develop native Windows applications, for example, the best solution people have come up with is copying the source code of dependencies to your own project and hoping for the best. Unfortunately wishful thinking is not a particularly efficient engineering method.

The main reason why dependency management is simple on Linux is a program called pkg-config. It is really simple, as most good solutions are, and basically just says that "to use dependency foo, use these flags to compile and these other flags to link". This makes all libraries that provide pkg-config definitions composable. That is, you can use them in a very simple way without knowing anything about their internals and you can mix and match them in any combination with any other library that provides a pkg-config definition.

It would be really nice if you could use pkg-config everywhere, but unfortunately this is not possible. In order to do pkg-config really well, the support for it must come from the basic platform, that is, from the Visual Studios and Xcodes out there. They do not provide such support at the moment and thus support is unlikely to appear in the near future. There are also some technical hurdles, especially on Windows with its multitude of incompatible binary ABIs.

Since we can't have pkg-config proper, is it possible to achieve the same result with a different kind of mechanism? It turns out that this is indeed possible and has been under development for quite a while in the Meson build system and its Wrap dependency system. The implementation details are slightly involved so for the purposes of this article the approach taken has been reduced to two main points.

The first one is that you can take any project that uses the Meson build system and then run it inside another Meson project. The subproject runs in a sandbox but the master project can use artifacts from it as if they were a native part of the master project. And yes, this can even be done recursively so subprojects can also use subprojects, but it goes even deeper than that. If two projects use the same subproject, the system guarantees that they both use the same subproject instance so it is compiled only once.

The second point is what you might call an "internal pkg-config". It encapsulates the same information as pkg-config proper, that is, what compiler flags to use and what libraries to link against.

With these two pieces we have achieved the same level of composability as pkg-config proper without using pkg-config itself. We can now take our dependency projects and use them inside any other project on any platform, because we do not depend on any piece of system infrastructure any more. We can choose to build and link all our dependencies statically for platforms such as iOS that require this. There is even a repository of Meson build definitions for open source libraries that you can easily use in your projects. There are not a whole lot of projects available yet, but anyone is free to submit more.

Talk is cheap, show me the code!

As a sample project we created a simple app using the SDL 2 library. It animates some images on the screen and plays sound effects when keys are pressed. It embeds all its images and sounds and statically links all its dependencies producing a single standalone exe file. Here is a screen shot of it running on OSX.

This project is available on GitHub. The most interesting bit about it is probably the build definition file. The whole definition, which takes care of embedding SDL2, converting resources into C++ source, setting up all build flags for OSX, Windows, Linux and so on, contains 33 lines of code, of which only three are dedicated to setting up the subproject (see lines 6-8).

If you were to build the same application with currently established methods, you would need to spend a fair bit of time downloading dependencies for each target platform, installing them, setting up build flags, debugging weird issues since you got something wrong and all that jazz. Using Meson you can just write a few declarations and let the build system deal with all the boring legwork including downloading and extracting all dependency projects.

That is the power of composability.

Wednesday, November 4, 2015

A difference in complexity

A common task in build systems is to run a custom command. As an example you might want to do this to regenerate the build definition or to run a tool such as clang-format. Different backends have different ways of specifying this.

Let's look at the difference between two such systems to see if we can learn something about design, simplification and other such things. First we examine the way you would define this in the Ninja build system. It looks roughly like this.

build mycommand: CUSTOM_COMMAND PHONY
 COMMAND = do_something.py
 description = 'Doing something'


All right. This seems reasonable. Now let us see how you would do the exact same thing in MSBuild.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
 <ItemGroup Label="ProjectConfigurations">
   <ProjectConfiguration Include="Debug|Win32">
     <Configuration>debug</Configuration>
     <Platform>Win32</Platform>
   </ProjectConfiguration>
 </ItemGroup>
 <PropertyGroup Label="Globals">
   <ProjectGuid>{1C3873C1-1796-4A8A-A7B2-CA9FA6068A1A}</ProjectGuid>
   <Keyword>Win32Proj</Keyword>
   <Platform>Win32</Platform>
   <ProjectName>something</ProjectName>
 </PropertyGroup>
 <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
 <PropertyGroup Label="Configuration">
   <ConfigurationType />
   <CharacterSet>MultiByte</CharacterSet>
   <UseOfMfc>false</UseOfMfc>
 </PropertyGroup>
 <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
 <PropertyGroup>
   <_ProjectFileVersion>10.0.30319.1</_ProjectFileVersion>
   <OutDir>.\</OutDir>
   <IntDir>test-temp\</IntDir>
   <TargetName>something</TargetName>
 </PropertyGroup>
 <ItemDefinitionGroup>
   <Midl>
     <AdditionalIncludeDirectories>%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
     <OutputDirectory>$(IntDir)</OutputDirectory>
     <HeaderFileName>%(Filename).h</HeaderFileName>
     <TypeLibraryName>%(Filename).tlb</TypeLibraryName>
     <InterfaceIdentifierFilename>%(Filename)_i.c</InterfaceIdentifierFilename>
     <ProxyFileName>%(Filename)_p.c</ProxyFileName>
   </Midl>
   <PostBuildEvent>
     <Message />
     <Command>setlocal
"c:\Python34\python3.exe" do_something.py
if %errorlevel% neq 0 goto :cmEnd
:cmEnd
endlocal & call :cmErrorLevel %errorlevel% & goto :cmDone
:cmErrorLevel
exit /b %1
:cmDone
if %errorlevel% neq 0 goto :VCEnd</Command>
   </PostBuildEvent>
 </ItemDefinitionGroup>
 <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
</Project>


Here endeth the lesson.

Saturday, October 31, 2015

Shell script or Python?

Every now and then a discussion arises on whether some piece of scripting functionality should be written as a basic shell script or in Python. Here is a handy step-by-step guide to answering the question. Start at the beginning and work your way down until you have your answer.

Will the script be longer than one screenful of text (80 lines)? If yes ⇒ Python.

Will the script ever need to be run on a non-unix platform such as Windows (including Cygwin)? If yes ⇒ Python.

Does the script require command line arguments? If yes ⇒ Python.

Will the script do any networking (e.g. call curl or wget)? If yes ⇒ Python.

Will the script use any functionality that is in the Python standard library but is not available as a preinstalled binary absolutely everywhere (e.g. compressing with 7zip)? If yes ⇒ Python.

Will the script use any control flow (functions/while/if/etc)? If yes ⇒ Python.

Will the script have modifications in the future rather than being written once and then dropped? If yes ⇒ Python.

Will the script be edited by more than one person? If yes ⇒ Python.

Does the script call into any non-trivial Unix helper tool (awk, sed, etc)? If yes ⇒ Python.

Does the script call into any tool whose behaviour is different on different platforms (e.g. mkdir -p, some builtin shell functionality)? If yes ⇒ Python.

Does the script need to do anything in parallel? If yes ⇒ Python.

If none of the above match then you have a script that proceeds in a straight line calling standard and simple Unix commands and never needs to run on a non-unix platform. In this case a shell script might be ok. But you might still consider doing it in Python, because scripts have a tendency to become more and more complex with time.

Friday, October 30, 2015

Proposal for launching standalone apps on Linux

One nagging issue Linux users often face is launching third party applications, that is, those that do not come from package repositories. As an example, the Pitivi video editor provides daily bundles that you can just download and run. Starting it takes a surprising amount of effort and may require going to the command line (which is an absolute no-no if your target audience includes regular people).

This page outlines a simple, distro agnostic proposal for dealing with the problem. It is modelled after OSX application bundles (which are really just directories with some metadata) and only requires changes to file managers. It is also a requirement that the applications must be installable and runnable without any privilege escalation (i.e. they must not require root).

The core concept is an application bundle. The definition of an app bundle named foobar is a subdirectory named foobar.app which contains a file called foobar.desktop. There are no other requirements for the bundle and it may contain arbitrary files and directories.

When a file manager notices a directory that matches the above two requirements, and thus is a bundle, it must treat it as an application. This means the following:


  • it must display it as an application using the icon specified in the desktop file rather than as a subdirectory
  • when double clicked, it must launch the app as specified by the desktop file rather than showing the contents of the subdir

These two are the only requirements to get a working OSX-like app bundle going on Linux. There can of course be more integration, such as offering to install an app to ~/Applications when opening a zip file with an app bundle or registering the desktop file to the system wide application list (such as the dash on Ubuntu).

Monday, October 12, 2015

Some comments on the C++ modules talk

Gabriel Dos Reis had a presentation on C++ modules at CppCon. The video is available here. I highly recommend that you watch it, it's good stuff.

However, as a build system developer, one thing caught my eye. The way modules are used (at around 40 minutes into the presentation) has a nasty quirk. The basic approach is that you have a source file foo.cpp, which defines a module Foobar. To compile it you say this:

cl -c /module foo.cxx

This causes the compiler to output foo.o as well as Foobar.ifc, which contains the binary definition of the module. To use the module you would compile a second source file like this:

cl -c baz.cpp /module:reference Foobar.ifc

This is basically the same way that Fortran does its modules and it suffers from the same problem which makes life miserable for build system developers.

There are two main problems. The first is that the name of the ifc file can not be known beforehand without scanning the contents of the source files. The second is that you can't know what arguments to put on the second command line without scanning that source file to see what it imports _and_ scanning potentially every other source file in your project to find out which file actually provides each import.

Most modern build systems work in two phases. First you parse the build definition and determine how, and in which order, to do the individual build steps. Basically this just serialises the dependency DAG to disk. The second phase loads the DAG, checks its status and takes all the steps necessary to bring the build up to date.

The first phase of the two takes a lot more effort and is usually much slower than the second. A typical ratio for a medium sized project is that the first phase takes roughly ten seconds of CPU time and the second takes a fraction of a second. In contemporary C++ almost all code changes only require rerunning the second step, whereas changing the build config (adding new targets etc) requires doing the first step as well.

This is possible because output files and their names are fully knowable without looking at the contents of the source files. With the proposed scheme this is no longer the case. A simple (if slightly pathological) example should clarify the issue.

Suppose you have file A that defines a module and file B that uses it. You compile A first and then B. Now change the source code so that the module definition goes to B and A uses it. How would a build system know that it needs to compile B first and only then A?
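As a sketch (using roughly the syntax shown in the talk, so treat the details as approximate), the starting point is something like this:

// A.cpp: defines the module, so it must be compiled first to
// produce Foobar.ifc.
module Foobar;
export int answer() { return 42; }

// B.cpp: imports the module, so Foobar.ifc must already exist and be
// passed on its command line. After moving the module definition to B
// the required compilation order silently reverses, but nothing in the
// file names tells the build system that.
import Foobar;
int caller() { return answer(); }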

The answer is that without scanning the contents of A and B before running the compiler this is impossible. This means that to get reliable builds either all build systems need to grow a full C++ parser or all C++ compilers must grow a full build system. Neither of these is particularly desirable. Even if build systems got these parsers, they would need to reparse the source of all changed files before starting the compiler and adjust the compiler arguments accordingly. This makes every rebuild take the slow path of step one instead of the fast step two.

Potential solutions


There are a few possible solutions to this problem, none of which are perfect. The first is to require that module Foo must be defined in a source file called Foo.cpp. This makes everything deterministic again but has that iffy Java feeling about it. The second option is to define the module in a "header-like" file rather than in source code. Thus a foo.h file would become foo.ifc and the compiler could pick that up automatically instead of the .h file.

Friday, September 25, 2015

Linux ABI compatibility even reduxer

My previous blog post raised some discussion on Reddit. There were two major issues raised. The first one is that a helloworld application is not much of a test. The other was that because the distro was so new, the test does not validate that backwards compatibility goes back 16 years, but only about 6.

Very well.



I installed Red Hat Linux 6.2, which was originally published in 2000. Then I downloaded the source code of Gimp 1.0, compiled it and embedded its dependencies, basically glib + gtk, into an app bundle. Stable core libraries such as libjpeg, X libraries and glibc were not bundled. I then copied this bundle to a brand new Ubuntu install and ran it.



It worked without a hitch.

What have we learned from this? Basically that it was possible in the year 2000 to create binary-only applications for Linux that would work 15 years later on a different distro.

If you follow the exact same steps today, it is reasonable to assume that you can create binary apps that will run without any changes in the year 2030.

Thursday, September 24, 2015

Linux applications backwards compatibility redux

As we all know, Linux applications are tied to the distro they were built on. They do not work on other distros, or even on later releases of the same distro. Anyone who has tried to run old binaries knows this. It is the commonly accepted truth.

But is it the actual truth?

In preparation for my LCA2016 presentation I decided to test this. I took Debian Lenny from 2009 and installed it in VirtualBox. Lenny is the oldest version that would work; older releases fail because distro mirrors no longer carry their packages. Then I downloaded GTK+ 1.2 from the previous millennium (1999). I built and installed it and GLib 1 into a standalone directory, compiled a helloworld application, wrote a launcher script and put them all in the same directory.

This directory formed an application bundle that is almost identical to OSX app bundles. The main difference to a distro package is that it embeds all its dependencies rather than resorting to distro packages. This is exactly how one would have produced a standalone binary at the time. I copied the package to a brand new Fedora install and launched it. This is the result.

This is a standard Fedora install running Gnome 3. The gray box in the middle is the GTK+ 1.2 application. It ran with no changes whatsoever. This is even more amazing when you consider that this means a backwards compatibility time span of over 15 years, over two completely different Linux distributions and even CPU architectures (Lenny was x86 whereas Fedora is x86_64).

Tuesday, August 18, 2015

Proposal for a dependency security scheme for sandboxed apps

Perhaps the main disadvantage of embedding external projects as source code rather than using distro packages is the loss of security. Linux distributions have lots of people working on keeping distro packages up to date and safe. With embedded source code this is no longer the case. The developer is in charge of keeping the application up to date. Some developers are better at it than others.

What makes this worse is the fact that if you embed (and especially statically link) your dependencies it is impossible to know what versions of which libraries you are using. If this information were available, then the host operating system could verify the list of embedded dependencies against a known white- or blacklist. The packaging format simply does not have this information.

So let's put it there.

A merge proposal has recently been submitted to Meson. It makes Meson create (and optionally install) a dependency manifest for each generated binary. This manifest is simply a JSON file that lists all the embedded dependencies that a given binary uses. Its proposed format looks like this.

{
  "type": "dependency manifest",
  "version": 1,
  "dependencies":
  {
    "entity": "1.0"
  }
}

In this case the executable has only one dependency, the project entity version 1.0. Other such dependencies could include zlib version 1.2.8 or openssl version 1.0.2d. The project names and releases would mirror upstream releases. This manifest would make it easy to guard against binaries that have embedded unsafe versions of their dependencies.

But wait, it gets better.

All dependencies that are provided by the Wrap database would (eventually ;) expose this information and thus would generate the manifest automatically. The developer does not need to do anything to get it built, only to say he wants to have it installed. It is simply a byproduct of using the dependency.

As the Linux application installation experience keeps moving away from distro packages and towards things such as xdg-app, snappy and the like, ensuring security becomes ever trickier. This is one such proposal that is already working and testable today. Hopefully it will see adoption in the community.

Single question mini-faq

What happens if someone creates a fraudulent or empty manifest file for their app?

The exact same thing as now when there is no manifest info of any kind. This scheme is not meant to protect against Machiavelli, only against Murphy.

Wednesday, August 5, 2015

Make your tool programs take file name arguments

There are a lot of utility programs for text manipulation, scanning and so on. Often they are written as filters so you can use them with shell redirection like this.

dosomething < infile.txt > outfile.txt

There's nothing wrong with this as such, but it causes problems when you invoke the program from somewhere other than a Unix shell prompt. As an example, if you need to invoke it from a compiled binary, things get very complex as you need to juggle with pipes and other stuff.

Because of this you should always make it possible to specify inputs and outputs as plain files. For new programs this is simple. This is a bigger problem for existing programs that already have hundreds of lines of code that read and write directly to stdout and stdin. Refactoring it to use files might be a lot of work for little visible gain.

No matter, the C standard library has you covered. It has a function called freopen that opens a file and reassociates an existing standard stream with it. To redirect stdin and stdout to files you just need to do this at the beginning of your program:

freopen(ifilename, "r", stdin);
freopen(ofilename, "w", stdout);
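A minimal sketch of the whole pattern, assuming the convention that a missing argument means "keep using the standard stream":

#include <cstdio>

int main(int argc, char **argv) {
    // Optional first argument: input file, spliced over stdin.
    if (argc > 1 && !std::freopen(argv[1], "r", stdin)) {
        std::perror(argv[1]);
        return 1;
    }
    // Optional second argument: output file, spliced over stdout.
    if (argc > 2 && !std::freopen(argv[2], "w", stdout)) {
        std::perror(argv[2]);
        return 1;
    }
    // The rest of the program keeps using stdin/stdout as before.
    int c;
    while ((c = std::getchar()) != EOF)
        std::putchar(c);
    return 0;
}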

Sunday, August 2, 2015

Not handling filenames with spaces should be classified a fatal bug

A lot of programs (even commonly used ones) fail miserably if you try to give them file names with spaces in them. The most common way to fail is to pass filenames to the shell unquoted. When you try to make people fix these issues you usually get the same response:

Not fixing, just rename all your files and dirs to not have spaces in them.

This is both understandable and totally misguided. The point of failing on spaces in filenames is not about those files. It's about not properly sanitizing your input and output. To see why this would be a problem, just imagine what would happen if you passed the following as a filename to a program that will use it in a shell command invocation:

; rm -rf /;

Because of this, every case of an application failing on spaces in file names should be classified at the same severity level as an SQL injection vulnerability.
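The robust fix is to never paste the filename into a shell command string in the first place, but to pass it as its own argument vector entry. Here is a sketch of the idea on a POSIX system; gzip is just an example tool:

#include <sys/wait.h>
#include <unistd.h>

// Runs "gzip -- <filename>" without involving a shell, so spaces,
// semicolons and quotes in the name are just ordinary characters.
int compress_file(const char *filename) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execlp("gzip", "gzip", "--", filename, (char *)nullptr);
        _exit(127);  // only reached if exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}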

Saturday, July 25, 2015

Running Linux installs with a different CPU

Every now and then you might need a Linux distribution for a CPU that you do not have. As an example you might want to run an ARM distro on an x86_64 machine. This is not only possible but quite simple. Here's how you would do it on Ubuntu or Debian.

First install the qemu-user-static package and create a subdirectory to hold your distro install. Then run the first stage install. Using Debian sid as an example:

sudo debootstrap --arch=armhf --foreign sid sid-arm

Once that has finished, copy the qemu binary inside this install. This makes it possible to run ARM binaries transparently on an Intel CPU.

sudo cp /usr/bin/qemu-arm-static sid-arm/usr/bin

Then finish the bootstrap operation.

sudo chroot sid-arm /debootstrap/debootstrap --second-stage

And now we are done. Chrooting into sid-arm and running uname -a prints this:

Linux userland 4.1.0-2-generic #2-Ubuntu SMP Wed Jul 22 18:19:08 UTC 2015 armv7l GNU/Linux

At this point you can use the chroot as if it were a native installation. You can install packages with apt, compile stuff with gcc and do anything else you might prefer. In theory you could even run it as a container with systemd-nspawn, but unfortunately qemu is missing some functionality to do that. The bug report is here. The only downside of this setup is that because the CPU is emulated, all binaries run a bit slow.

Tuesday, July 21, 2015

List of achievements for an open source project

- [ ] first commit
- [ ] making the code public
- [ ] first checkout by someone else
- [ ] first user
- [ ] first bug report
- [ ] first mention in a blog post by someone you don't know
- [ ] first submitted patch
- [ ] approval to Debian/Fedora
- [ ] a magazine article mentioning your project
- [ ] a magazine article about your project
- [ ] first conference presentation
- [ ] a project subreddit is created
- [ ] a conference presentation about your project held by someone you don't know
- [ ] a bug report/patch from Linus Torvalds/someone else well known
- [ ] RMS claims your project is anti-freedom
- [ ] the project gets forked
- [ ] an O'Reilly animal book gets published
- [ ] a conference wholly about your project
- [ ] someone reimplements your project in a new language in 1/10th of the time it took to write the original one

Wednesday, July 15, 2015

Strange requirements of low level packages

As some of you might know, I'm the main developer of the Meson build system. It is implemented in Python 3 but has strict requirements on dependencies. Every now and then this confuses people so here's an explanation why we do things slightly unconventionally.

One of the greatest things about modern development environments is the number of third party packages and how easy they are to use. This is why a lot of effort has gone into giving Meson a simple way to get dependencies automatically on multiple platforms. This system is called Wrap, but this text is not about that. Instead it's about the lack of dependencies in Meson itself. The policy is that in order to run Meson and its test suite, you are only allowed to depend on the Python standard library. No external dependencies are allowed. For non-core stuff (coverage, static analysis) external dependencies are ok.

This seems a bit strange and has caused us to have to reinvent some wheels. There are valid reasons for this, though, which have to do with the fact that packages at the lowest levels of the software stack have requirements that regular applications don't. The most important of these is that a build system needs to work in very limited distros and environments.

Let's look at the Mer project as an example. It aims to provide an extremely slimmed down Linux distribution for use in mobile applications. They support only a small number of packages by design. There are only 8 Python packages in total. Six of them are extension packages, which are not useful for a build system. If we wanted to get Meson there, what would it actually mean?

If Meson depends on any other package, it can not be accepted into Mer. Since source bundling is prohibited and the system does not allow net access during package creation, it just can't be done. Either you disable the part that requires the extension or you don't get your package accepted. You might get an exception but you need to apply for it and it takes time and effort for many people. If only the unit test system depends on external packages you can disable it, but then you can't tell if your project is working correctly. This is not acceptable either.

There are a bunch of distros like this. Then there are the BSDs that might not have your dependencies in their repos. Rather than force them all to deal with this dependency issue we instead write our code so that it has no external dependencies. This means more work for us but less for everyone else. Luckily Python 3's standard library is very good and has almost everything we need.

Another use case that many people don't think about is that a lot of development is done on machines that neither have Internet access nor the capability to install additional software. Several corporations have requirements such as these for real or imagined security reasons. Even some OSS development is done like this. Requesting permission to use a new dependency can take six weeks.

I'm not saying that this is right or wrong or even sane. What I'm saying is that this is reality and that you ignore reality at your own peril.

In an environment like this getting Meson approved for use takes only one round of bureaucracy. Every dependency requires an additional round (if you are lucky). To reduce friction and ease uptake, we must again cut down on our dependencies. The easier Meson is to deploy, the more people are willing to switch to it.

There are other advantages, too. It is easier to get contributions from Windows and OSX developers since they can hack on the code without any additional steps. You'd be surprised how tough it can be to download Python dependencies, even with setuptools. Deployment is easier, too: "Download Python 3 and Meson, run" as opposed to "download Python 3 and Meson, use setuptools to move it places, then install these dependencies and then set up these paths and ...".

Because of these reasons we don't use external dependencies. This is the price you pay for being at the very bottom of the software stack.

Tuesday, July 14, 2015

The fable of mashed potatoes

Once upon a time in a faraway land in a kingdom much like ours lived a carpenter. Every day he worked for a small company with his fellow carpenters. This company produced book shelves. The recent increase in population literacy rates had made books very popular so there was a growing need for bookshelves.

The bookshelves they manufactured were quite popular, and their customers really seemed to appreciate their quality. Obviously making bookshelves was not rocket science, mostly because rocketry had not been invented yet, but also because it was all in all a fairly well understood problem. The company's success made it possible for them to hire a few more carpenters. And then a few more.

Eventually someone in the management chain started wondering about how to make things better. Faster. More organised. So he went looking for a bookshelf expert to lead this voyage towards unknown bookshelf frontiers. Eventually such a person was found, hired and initiated. Soon thereafter he gathered all of the carpenters of the company into one room, for he had an announcement to make.

The bookshelf expert gathered everyone around him in a tight circle. He was giddy and excited to spring forth his major plan. All the carpenters were excited as well, there was electricity in the air. Then the expert finally started talking. First he talked about the past, then the present and finally came time for the future of all of storage space.

- From now on, we shall make all of our bookshelves ...

Every carpenter in the room leaned forward. They held their breath.

- ... out of mashed potatoes!

They kept holding their breath, but for a different reason.

No-one can really remember how the rest of the talk went. One imagines the expert talked about the glorious future this would bring about. Then he congratulated everyone and left, presumably for a meeting with other mashed potato furniture specialists.

Our friend the carpenter felt puzzled. He wandered about the workshop for a while and then talked to his boss. He was frankly completely baffled. You couldn't even make a pencil stand out of mashed potatoes, let alone a bookshelf.

- He's a world class expert on potatoes, the boss said, he knows what is best so we should be doing what he says.

- What? This has nothing to do with potatoes at all. There's no way to make a bookshelf out of mashed potatoes, for starters it won't hold any weight, and also ...

The boss waved his hand at the carpenter.

- Have you ever tried doing a bookshelf out of mashed potatoes? I mean actually, really in practice?

- No, but, the carpenter tried to continue before being interrupted

- Well that's a very negative attitude to be having. Why are you being so negative? It reflects poorly on your co-workers. Look around you, they are all working on their next projects and getting shelves done.

The carpenter tried to think of something to say, but the sheer absurdity of the situation got the better of him.

- Well back to work, we have products to ship, the manager said and left the workshop.

The sun went down and came back up the following day. Our carpenter had decided to make the best of this situation. Surely, he thought, someone would notice the flaw inherent in a mashed potato based storage unit. He didn't have long to wait. Soon the expert tapped him on the shoulder and asked him to come to the next room.

- I have been told, he said in a slightly judgmental tone, that you have raised some issues with my new plan.

The carpenter tried to follow up but was cut short.

- Apparently you have been telling people that my potatoes have mildew.

- Err, what? I have said nothing of the kind.

- I will have you know that all of these potatoes are the finest in the world. I have grown them myself with my own two hands. I have won several awards for potato farming.

- I'm sure you have, but ...

The expert cut him off again. He would not have any dissenting voices here. He went on explaining at great length how potatoes are the best nutritional source, how much time and effort he had put into each single potato and a flurry of other potato related facts.

- Now I'm going to let this infraction against potatoes go unnoticed, but do remember that a negative attitude such as yours will cause problems sooner or later. I highly recommend you fix it. Now, back to work.

And to work he went, along with all the rest of the team. They started producing mashed potato bookshelves as planned and even scrapped already made but not yet delivered wooden shelves for new potatoe'd ones. And they were terrible. Upon seeing them their customers immediately demanded their money back. Some shelves did make it into customers' apartments, where they did not hold books but mostly just stank and attracted flies. Which caused the customers to demand their money back with interest.

As all sources of income dried up and management refused to go back to wood based shelves (because "we have spent so much on this technology" and because "going back now would make us look weak"), it did not take long for the company to shut down. When the workshop was being cleaned out for the last time, the carpenter's boss spoke with a sad timbre in his voice.

- Everything was going so well and now we have nothing. How can this be?

The carpenter had kept his opinions under his hat since the confrontation but as there was nothing left to lose, he figured he'd make one last appeal to sanity.

- Because making bookshelves out of mashed potatoes is a massively stupid idea that can not possibly work. Even a child can see that immediately. Remember how I tried to tell you this when things started?

The boss straightened his back and squinted. For a split second you could see tiny sparks shooting from his eyes.

- That's exactly it! You never gave our new strategy a chance! You had it doomed before it even started and then worked actively against it! Your negative attitude is the reason we are now all out of a job. You destroyed the livelihood of dozens of people! Are you proud of yourself?

The carpenter was not. He had failed. Miserably.

Failed to explain to a grown man that making a bookshelf out of mashed potatoes was a terrible idea.

Friday, July 3, 2015

How many is too many?

Trivia question: when you run a random configure script, how many processes does it spawn?

Before reading further, try to think of the answer by yourself. Just do it. Pick a number you think is the correct answer.

Measuring this is simple. You can do it with the following command:

strace -c -f -e trace=fork,vfork,clone,execve bash -c ./configure

Different configure scripts have different results, of course. I ran this on the configure script of RPM, which I happened to have handy.

And the correct answer is ...

... 6611.

Aren't you glad that processes are cheap?

Thursday, June 25, 2015

Simple structure of arrays with C++ templates

One feature that has been getting more popular recently is called the structure of arrays. As an example, Jonathan Blow's new Jai programming language is planned to have built-in support for it. The basic principle is very simple. Instead of having this:

struct Foo {
  int x;
  std::string name;
  FooObj bar;
};


std::vector<Foo> items;

You'd want to have something like this:

struct Items {
  std::vector<int> x;
  std::vector<std::string> name;
  std::vector<FooObj> bar;

  struct Foo& operator[](size_t index);
};

The reason for this is better cache locality. First of all you don't get padding holes between items. The second issue is that usually when you do operations on the elements of these arrays you only touch a subset of the items. This data layout reduces cache misses as you can skip a large portion of the data.

In C++ terms you would like to have a declaration like the following:

Soa<int, std::string, FooObj> items;

This would then create a data structure like the one discussed above with all basic operations such as push_back, capacity and so on. Then you could do things like items[0].name = "hello" or std::sort(items.begin(), items.end(), mypredicate).

C++ does not support this natively but with templates you can create it yourself. There are some things that you can't do (such as accessing elements by name) but other bits are fairly straightforward. The code for this example is on GitHub. For simplicity this implementation only has two items in the struct and they must have the same type. Doing arbitrary items would require working with tuples and other fun stuff that is beyond my template-fu.

The core class contains the two vectors that hold the data. The core building block is this helper struct:

template<typename T>
struct SoaItemRef {
    T &item1;
    T &item2;
    // rest omitted for clarity
};

Whenever we need to access the underlying data we return this struct instead. The refs are initialised to point to the underlying data so all reads and writes get proxied to the actual data. Then there are a bunch of helper methods for assignment. In addition we need iterator definitions. Those can be found in the actual code.
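As an example of how the proxy gets used, operator[] can simply build one of these reference structs on the fly. This is a minimal sketch, assuming the two-vector layout and the SoaItemRef from above:

#include <cstddef>
#include <vector>

template<typename T>
class Soa {
public:
    void push_back(const T &i1, const T &i2) {
        items1.push_back(i1);
        items2.push_back(i2);
    }

    // The returned proxy holds references straight into the two vectors,
    // so reads and writes through it touch the real data.
    SoaItemRef<T> operator[](std::size_t index) {
        return SoaItemRef<T>{items1[index], items2[index]};
    }

private:
    std::vector<T> items1;
    std::vector<T> items2;
};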

This reference struct gets you most of the functionality. You just need a swap method that does elementwise value copying between SoaItemRefs. The second addition is a value-only type that looks like this:

template<typename T>
struct SoaItem {
    T item1;
    T item2;
    // stuff such as comparison operators omitted
};

This allows you to do things like std::find(items.begin(), items.end(), SoaItem<int>{1, 2}).

Missing functionality

There's quite a lot of it. The most obvious thing is that you should be able to specify an arbitrary amount of arbitrary types. The problem here is that this gets really complicated really fast. Another niggle is that the helper structs are standalone where they should actually be inner classes of Soa. I tried to make this happen for quite a while but I could not make std::swap resolve to these inner types. Moving them to standalone classes made everything magically work.

It is possible that you can't do items[2].name = "foo" without resorting to the preprocessor and possibly not even then. It's not going to be pretty in any case. Feel free to try it out, though.