Friday, June 7, 2019

Tweaking the parallel Zip file writer

A few years ago I wrote a command line program to compress and decompress Zip files in parallel. It turned out to work pretty well, but it had one design flaw that kept annoying me.

What is the problem?

Decompressing Zip files in parallel is almost trivial. Each file inside the archive can be decompressed independently without affecting any other decompression task. Fire up N processing tasks and decompress files until finished. Compressing Zip files is more difficult to parallelize. Each file can still be compressed separately, but the problem comes from writing the output file.
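Before getting to that, here is a hedged illustration of how simple the decompression side is. This is a Python sketch, not the actual tool's code, and for simplicity it assumes a flat archive with no subdirectories:

    import zipfile
    from concurrent.futures import ThreadPoolExecutor

    def extract_one(archive_path, name, outdir):
        # Each task opens its own handle, so tasks share no state.
        with zipfile.ZipFile(archive_path) as zf:
            zf.extract(name, path=outdir)

    def parallel_unzip(archive_path, outdir, num_tasks=4):
        with zipfile.ZipFile(archive_path) as zf:
            names = zf.namelist()
        with ThreadPoolExecutor(max_workers=num_tasks) as pool:
            for name in names:
                pool.submit(extract_one, archive_path, name, outdir)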

The output file must be written one file at a time. So if one compressed file is being written to the output file then other compression tasks must wait until it finishes before their output can be written to the result file. This data can not be kept in memory because it is common to have output files that are larger than available memory.

The original solution (and thus the design flaw alluded to) was to have each compressor write its output to a temporary file. The writer would then read the data from the temporary file, write it to the final result file and delete the temporary file.

This works but means that the data gets written to the file system twice. It may also require up to 2× disk space. The worst case happens when you compress only one very big file. On desktop machines this is not such a problem, but on something like a Raspberry Pi the disk is an SD card, which is very slow. You only want to write it once. SD cards also wear out when written to, which is another reason to avoid writes.

The new approach

An optimal solution would have all of these properties:
  1. Uses all CPU cores 100% of the time (except at the end when there are fewer tasks than cores).
  2. Writes data to the file system only once.
  3. Handles files of arbitrary size (much bigger than available RAM).
  4. Has bounded memory consumption.
It turns out that not all of these are achievable at the same time. Or at least I could not come up with a way. After watching some Numberphile videos I felt like writing a formal proof but quickly gave up on the idea. Roughly speaking, since you can't reliably estimate when the tasks will finish or how large the resulting files will be, it does not seem possible to choose an optimal strategy for writing the results out to disk.

The new architecture I came up with works like this:

Rather than writing its result to a temporary file, each compressor writes it to a byte queue with a fixed maximum size. This was chosen to be either 10 or 100 megabytes, which means that in practice most files will fit the buffer. The queue can be in one of three states: not full, full or finished. The difference between the last two is that a full queue is one where the compression task still has data to compress but it can't proceed until the queue is emptied.
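A minimal sketch of such a queue, as a Python illustration of the idea rather than the actual implementation (the method names are made up, the 10 megabyte figure is the one mentioned above):

    import threading

    class ByteQueue:
        """A bounded byte buffer that is either not full, full or finished."""
        def __init__(self, max_size=10 * 1024 * 1024):
            self.max_size = max_size
            self.buf = bytearray()
            self.finished = False          # the producer has no more data
            self.lock = threading.Lock()
            self.space_available = threading.Condition(self.lock)

        def write(self, data):
            # Called by the compressor; blocks while the queue is full.
            with self.space_available:
                while len(self.buf) >= self.max_size:
                    self.space_available.wait()
                self.buf.extend(data)

        def mark_finished(self):
            with self.lock:
                self.finished = True

        def is_full(self):
            # Full: the producer still has data but can't proceed until drained.
            with self.lock:
                return not self.finished and len(self.buf) >= self.max_size

        def is_finished(self):
            with self.lock:
                return self.finished

        def drain(self):
            # Called by the writer; returns buffered bytes and frees up space.
            with self.space_available:
                data = bytes(self.buf)
                self.buf.clear()
                self.space_available.notify_all()
                return data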

The behaviour is now straightforward. First launch compressor tasks just as when decompressing. The file writer then goes through all the queues. If it finds a finished queue it writes it to disk and launches a new task. If it finds a full queue it does the same, but it must write out the whole stream, meaning it is blocked until the current file has been fully compressed. If that compression takes too long, all other compression tasks will finish (or become full) but new ones can't be launched, leading to CPU underutilization.
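Continuing the hypothetical sketch above, the writer side could look roughly like this; launch_new_task is a made-up stand-in for whatever starts the next compression task:

    import time

    def writer_loop(queues, outfile, launch_new_task):
        # queues holds one ByteQueue per compression task in flight.
        while queues:
            for q in list(queues):
                if q.is_finished():
                    # The whole compressed file fit in the buffer:
                    # write it out and start a new task in its place.
                    outfile.write(q.drain())
                    queues.remove(q)
                    launch_new_task(queues)
                elif q.is_full():
                    # Blocked on this file: stream it to completion before
                    # looking at any other queue.
                    while not q.is_finished():
                        outfile.write(q.drain())
                        time.sleep(0.01)  # stand-in for a proper blocking wait
                    outfile.write(q.drain())
                    queues.remove(q)
                    launch_new_task(queues)
            time.sleep(0.01)              # stand-in for a proper blocking wait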

Is it possible to do better?

Yes, but only as a special case. Btrfs supports copying data from one file to another (via reflinks) in O(1) time, taking only O(1) extra space. Thus you could write all the data to temp files, copy it into the final file and delete the temp files.
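On Linux this cloning is exposed as the FICLONE ioctl (the same mechanism cp --reflink=always uses). As a hedged illustration of the mechanism only (this is a whole-file clone; the Zip case would additionally need offset-based cloning, and it only succeeds on file systems that support reflinks, such as Btrfs):

    import fcntl

    FICLONE = 0x40049409  # ioctl number from <linux/fs.h>

    def reflink(src_path, dst_path):
        # Make dst_path share src_path's data extents instead of copying bytes.
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())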

Saturday, June 1, 2019

Looking at why the Meson crowdfunding campaign failed

The crowdfunding campaign to create a full manual for the Meson build system ended yesterday. It did not reach its 10 000€ goal so the book will not be produced and instead all contributed money will be returned. I'd like to thank everyone who participated. A special thanks goes out to Centricular for their bronze corporate sponsorship (which, interestingly, was almost 50% of the total money raised).

Nevertheless the fact remains that this project was a failure and a fairly major one at that since it did not reach even one third of its target. This can not be helped, but maybe we can salvage some pieces of useful information from the ruins.

Some statistics

There were a total of 42 contributors to the campaign. Indiegogo says that a total of 596 people visited the project page while it was live, so roughly 7% of the people who came to the site participated. It is harder to know how many people saw information about the campaign without coming to the site. An estimate based on the blog's readership, Twitter reach and other sources puts the number at around 5000 globally (with a fairly large margin of error). This would indicate a conversion rate of about 1% of all the people who saw any information about the campaign. In reality the percentage is lower, since many of the contributors were people who did not really need convincing. Thus the conversion rate is probably closer to 0.5% or even lower.

The project was set up so that 300 contributors would have been enough to make the project a success. Given the number of people using Meson (estimated to be in the tens of thousands) this seemed like a reasonable goal. Turns out that it wasn't. Given these conversion numbers you'd need to reach 30 000 – 60 000 people in order to succeed. For a small project with zero advertising budget this seems like a hard thing to achieve.
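The arithmetic behind those numbers, spelled out:

    contributors = 42
    site_visitors = 596
    estimated_reach = 5000      # rough estimate, large margin of error
    needed = 300                # contributors required to reach the goal

    print(contributors / site_visitors)    # ~0.07  -> about 7% of visitors
    print(contributors / estimated_reach)  # ~0.008 -> roughly 1% of total reach
    print(needed / 0.01)                   # 30 000 people at a 1% conversion rate
    print(needed / 0.005)                  # 60 000 people at a 0.5% conversion rate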

On the Internet everything drowns

Twitter, LinkedIn, Facebook and the like are not very good channels for spreading information. They are firehoses where any one post has an active time of maybe one second if you are lucky. And if you are not, the platforms' algorithms will hide your post because they deem it "uninteresting".  Sadly filtering seems to be mandatory, because not having it makes the firehose even more unreadable. The only hope you have is that someone popular writes about your project. In practice this can only be achieved via personal connections.

Reddit-like aggregation sites are not much better, because you have basically two choices: post on a popular subreddit or on an unpopular one. In the first case your post probably won't even make it to the front page; all it takes is a few downvotes because the post is "not interesting" or "does not belong here". A post that is not on the front page might as well not exist, since no-one will read it. Posting on an unpopular subreddit is no better. Your post will be there, but it will reach maybe 10 people and out of those maybe one will click on the link.

News sites are great for getting the information out, but they suffer from the same popularity problem as everything else. A distilled (and only slightly snarky) explanation is that news sites write mainly about two things:
  1. Things they have already written about (i.e. have deemed popular)
  2. Things other news sites write about (i.e. that other people have deemed popular)
This is not really the fault of news sites. They are doing their best at a very difficult job. This is just how the world and popularity work. Things that are popular get more popular because of their current popularity alone. Things that are not popular are unlikely to ever become popular because of their current unpopularity alone.

Unexpected requirements

One of the goals of this campaign (or experiment, really) was to see if selling manuals would be a sustainable way to compensate FOSS project developers and maintainers for their work. If it worked, this would be a good form of compensation, because there are already established legal practices for selling books across the world. Transferring money in other ways (donations etc.) is difficult and there may be legal obstacles.

Based on this one experiment, this does not seem to be a feasible approach. Interestingly, multiple people let me know that they would not be participating because the end result would not be released under a free license. Presumably the same people do not complain to book store clerks that "I will only buy this Harry Potter book if, immediately after my purchase, the full book is released for free on the Internet". But for some reason there is a hidden assumption that because a person has released something under a free license, they must publish everything else they do under free licenses as well.

These additional limitations make this model of charging for docs really hard to pull off. There is no possibility of steady, long term money flow because once a book is out under a free license it becomes unsellable. People will just download the free PDF instead. A completely different model (or implementation of the attempted model) seems to be needed.

So what happens next?

I don't really know. Maybe the book can get published through an actual publisher. Maybe not. Maybe I'll just take a break from the whole thing and get back to it later. But to end on some kind of a positive note I have extracted one chapter from the book and have posted it here in PDF form for everyone to read. Enjoy.

Sunday, May 12, 2019

Emulating rpath on Windows via binary patching

A nice feature provided by almost every Unix system is rpath. Put simply it is a way to tell the dynamic linker to look up shared libraries in custom directories. Build systems use it to be able to run programs directly from the build directory without needing to fiddle with file copying or environment variables. As is often the case, Windows does things completely differently and does not have a concept of rpath.

In Windows shared libraries are always looked up in directories that are in the current PATH. The only way to make the dynamic linker look up shared libraries in other directories is to add them to the PATH before running the program. There is also a way to create a manifest file that tells the loader to look up libraries in a special place but it is always a specially named subdirectory in the same directory as the executable. You can't specify an arbitrary path in the manifest, so the libraries need to be copied there. This makes Windows development even more inconvenient because you need to either fiddle with paths, copy shared libraries around or statically link everything (which is slooooow).

If you look at Windows executables with a hex editor, you find that they behave much the same way as their unixy counterparts. Each executable contains a list of the dependency libraries that it needs, such as helper.dll. Presumably what happens at runtime is that the dynamic linker parses the exe file and passes the library names to some lookup function that finds the actual libraries given the current PATH value. This raises the obvious question: what would happen if, somehow, the executable file had an absolute path written in it rather than just the file name?

It turns out that it works and does what you would expect it to. The backend code accepts absolute paths and resolves them to the correct file without PATH lookups. With this we have a working rpath simulacrum. It's not really workable, though, since the VS toolchain does not support writing absolute paths to dependencies in output files. Editing the result files by hand is also a bit suspicious because there are many things that depend on offsets inside the file. Adding or removing even one byte will probably break something. The only thing we can really do is to replace one string with a different one with the same length.

This turns out to be the same problem that rpath entries have on Unix and the solution is also the same. We need to get a long enough string inside the output file and then we can replace it with a different string. If the replacement string is shorter, it can be padded with null bytes because the strings are treated as C strings. I have written a simple test repository doing this, which can be downloaded from Github.

On Unix, rpath is specified with a command line argument, so it can be padded to an arbitrary size. Windows does not support this so we need to fake it. The basic idea is simple. Instead of creating a library helper.dll, we create a temporary library called aaaaaaaaaaaaaaaaaaaaaaaa.dll and link the program against that, so the executable ends up with that long placeholder name in its list of dependencies.

Now we can copy the library to its real name in a subdirectory and patch the executable so that the dependency entry points to that path instead.

The final name was shorter than what we reserved, so there are a bunch of zero bytes in the executable. This program can now be run and it will always resolve to the library that we specified. When the program is installed, the entry can be changed to just plain helper.dll in the same way, making it indistinguishable from libraries built without this trick (apart from the few extra null bytes).
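The patching itself is nothing more than a search and replace over the raw bytes of the executable. Here is a minimal sketch of the idea (my own illustration, not the code in the linked repository; the paths in the usage example are made up):

    def patch_dependency(exe_path, placeholder: bytes, real_path: bytes):
        # Overwrite the placeholder import name in place, padding with NUL
        # bytes so the file size and all internal offsets stay the same.
        if len(real_path) > len(placeholder):
            raise ValueError("replacement must not be longer than the placeholder")
        with open(exe_path, "rb") as f:
            data = f.read()
        padded = real_path + b"\0" * (len(placeholder) - len(real_path))
        with open(exe_path, "wb") as f:
            f.write(data.replace(placeholder, padded))

    # Hypothetical usage:
    # patch_dependency("prog.exe",
    #                  b"aaaaaaaaaaaaaaaaaaaaaaaa.dll",
    #                  b"c:\\some\\path\\helper.dll")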

Rpath on Windows: achieved.

Is this practical?

It's hard to say. I have not tested this on anything except toy programs but it does seem to work. It's unclear if this was the intended behaviour, but Microsoft does take backwards compatibility fairly seriously so one would expect it to keep working. The bigger problem is that the VS toolchain creates many other files, such as pdb debug info files, that probably don't like being renamed like this. These files are mostly undocumented so it's difficult to estimate how much work it would take to make binary hotpatching work reliably.

The best solution would be for Microsoft to add a new linker argument to their toolchain that would write dependency info to the files as absolute paths and to provide a program to rewrite those entries as discussed above. Apple already provides all of this functionality in their core toolchain. It would be nice for MS to do the same. This would simplify cross platform development because it would make all the major platforms behave in the same way.

Wednesday, May 8, 2019

Why crowdfunding freely licensed documentation is illegal in Finland

On the Meson manual crowdfunding page it is mentioned that the end result can not be put under a fully free license. Several people have said that they "don't believe such a law could exist" or words to that effect. This blog post is an attempt to explain the issue in English, as all available text about the case is in Finnish. As a disclaimer: I'm not a lawyer, the following is not legal advice, and there is no guarantee that any of the information below is even factual.

To get started we need to go back in time a fair bit and look at disaster relief funds. In Finland you must obtain a permit from the police in order to gather money for general charitable causes. This permit has strict requirements. The idea is that you can't just start a fundraiser, take people's money and pocket it; instead the money must provably go to the cause it was raised for. The way the law is written, a donation to charity is something given without getting "something tangible" in return. Roughly, if you give someone money and get a physical item in return, it is considered a sales transaction. If you give money to someone and in return get a general feeling of making the world better in some way, that is considered a donation. The former is governed by laws of commerce, the latter by laws of charity fundraising.

A few years ago there was a project to create a book to teach people Swedish. The project's page is here, but it is all in Finnish so it's probably not useful to most readers. They ran a crowdfunding campaign to finish the project, with all the usual perks. One of the goals of the crowdfunding was to make the book freely distributable after publishing. This is not unlike how funding feature work on FOSS projects works.

What happened next is that the police stepped in and declared this illegal (news story, in Finnish). Their interpretation was that participating in this campaign without getting something tangible in return (i.e. paying less than the amount needed to get the book) was a "charitable donation". Thus it needs a charity permit as explained above. Running a crowdfunding campaign is still legal if it is strictly about pre-sales. That is, every person buys "something" and that something needs to have "independent value" of some sort. If the outcome of a project is a PDF and that PDF becomes freely available, it can be argued that people who participated did not get any "tangible value" in exchange for their money.

Because of this the outcome of the Meson manual crowdfunding campaign can not be made freely available. This may seem a bit stupid, but sadly that's the law. The law is undergoing changes (see here, in Finnish), but those changes will not take effect for quite some time and even when they do it is unclear how those changes would affect these kinds of projects.

Wednesday, May 1, 2019

The Meson manual crowdfunding campaign

The Meson Build system has existed for over six years. Even though we have a fairly good set of documentation, there has not been a standalone user's manual for it. Until now.


A crowdfunding campaign to finance the manual has just been launched on Indiegogo. The basic deal is simple: for 30€ you get the final book as a PDF. To minimize work and save trees, there is no physical version. There are also no stickers, beer mats or any other tchotchkes. There are a few purchase options as well as opportunities for corporate sponsorships. Please see the Indiegogo project page for further details. If there are any questions about this campaign, feel free to contact me. The easiest way is via email.

Overall I'm quite excited about this campaign. One reason is obviously personal, but the other has to do with sustainability of FOSS projects in general. There has been a lot of talk about how maintainers of open source projects can get compensated for their work. This campaign can be seen as an experiment to see if the crowdfunding model could work in practice.

So if you are just getting started with building software and want a user manual, buy this book. If you have basic experience with Meson and want to dive deeper, buy this book. If you are a seasoned veteran and don't really need a book but want to support the project (specifically me), buy this book. Regardless of anything else, please spread the word on your favourite social media and real world venues of choice.

Let the experiment begin!

Monday, April 15, 2019

An important message to people designing testing frameworks!

Do not, I repeat, do NOT make your test framework fail a test run if it writes any text to stderr! No matter how good of an idea you think it is, it's terrible.

If you absolutely, positively have to do that, then print the reason for this failure in your output log. If you can't think of a proper warning message, feel free to copy paste this one:

THIS TEST FAILED BECAUSE IT WROTE TO STDERR AND SOMEONE HERE (OBVIOUSLY NOT ME) THOUGHT MAKING THAT A HARD ERROR WOULD BE A GOOD IDEA!!!!!!!

Sincerely: a person who has lost hours of his life on this sh*t on multiple occasions and can never get it back.

Tuesday, February 26, 2019

Could Linux be made "the best development environment" for games?

It is fairly well established that Linux is not the #1 game development platform at this point in time. It is also unlikely to be the most popular one any time soon, for reasons of economics and inertia. A more interesting question, then, is: can it be made the best one? The one developers favour over all others? The one that is so smooth and where things work so well together that developers feel comfortable and productive in it? So productive that whenever they use some other environment, they get aggravated by being forced to use such a substandard platform?

Maybe.

The requirements

In this context we use "game development" as a catch all term for software development that has the following properties:
  1. The code base is huge, millions or tens of millions of lines of code.
  2. Non-code assets are up to tens of gigabytes in size.
  3. A program, once built, needs to be tested on a bunch of different devices (for games, using different 3D graphics cards, amounts of memory, processors etc).

What we already have

A combination of the regular Linux userland and Flatpak already provides a lot. Perhaps the most unique feature is that you can get the full source code of everything in the system, all the way down to the graphics cards' device drivers (certain proprietary hardware vendors notwithstanding). There is no need to guess what is happening inside the graphics stack; you can just add breakpoints and step inside it with full debug info.

Linux as a platform is also faster than competing game development systems at most things. Process invocation is faster, file systems are faster, compilations are faster, installing dependencies is faster. These are the sorts of small gains that translate to better productivity and developer happiness.

Flatpak as a deployment mechanism is also really nice.

What needs to be improved?

Many tools in the Linux development chain assume (directly or indirectly) that doing something from scratch for every change is "good enough". Once you get to a large enough scale this no longer works. As an example, flatpak-builder builds its packages by copying the entire source tree inside the build container. If your repository is in the gigabyte range this does not work; something like bind mounting should be used instead. (AFAIK GNOME Builder already does something like this.) Basically every operation needs to be O(delta) instead of O(whole_shebang):
  • Any built program that runs on the developer's machine must be immediately deployable on test machines without needing to do a rebuild on a centralised build server.
  • Code rebuilds must be minimal.
  • Installs must skip files that have not changed since the last install.
  • Package creation must only account for changed files.
  • All file transfers must be delta based. Flatpak already does this for package downloads but building the repo seems to take a while.
A simple rule of thumb for this is that changing one texture in a game and deploying the result on a remote machine should not take more than 2× the amount of time it would take to transfer the file directly over with scp.
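As a sketch of what "only touch what changed" means in practice (an illustration with made-up names, not an existing tool), an install step that skips unchanged files is not much more than a content-hash comparison:

    import hashlib, os, shutil

    def file_hash(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    def delta_install(srcdir, dstdir):
        # Copy only files whose contents differ from what is already installed.
        for root, _dirs, files in os.walk(srcdir):
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(dstdir, os.path.relpath(src, srcdir))
                if os.path.exists(dst) and file_hash(src) == file_hash(dst):
                    continue                     # unchanged, skip the write
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)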

Other tooling support

Obviously there needs to be native support for distributed builds, be it distcc, IceCream or something fancier. Even more important, though, is great debugging support.

By default the system should store full debug info and corresponding source code. It should also log all core dumps. Pressing one button should then open up the core file in an IDE with up to date source code available and ready to debug. This same functionality should also be usable for debugging crashes in the field. No crash should go unstored (assuming that there are no privacy issues at play).

Perhaps the hardest part is the tooling for non-coders. It should be possible to create new game bundles with new assets without needing to touch any "dev" tools, even when running a proprietary OS. For example there could be a web service where you could do things like "create new game install on machine X and change model file Y with this uploaded file Z". Preferably this should be doable directly from the graphics application via some sort of a plugin. 

Does something like this already exist?

Other platforms have some of these things built in and some can be added with third party products. There are probably various implementations of these ideas inside the closed doors of many current game development studios. AFAICT there does not exist a fully open product that would combine all of these in a single integrated whole. Creating that would take a fair bit of work, but once done we could say that the simplest way to set up the infrastructure to run a game studio is to get a bunch of hardware, open a terminal and type:

sudo apt install gamestudio