Thursday, December 29, 2022

A quantitative analysis of the Trade Federation's blockade of Naboo

The events of Star Wars Episode I: The Phantom Menace center on a blockade that the Trade Federation imposes on the planet Naboo. The details are not explained in the source material, but it is assumed to mean that no ship can take off from or land on the planet. The blockade is implemented by stationing a fleet of heavily armed starships around the planet. What we would like to find out is what sort of an operation this blockade was.

In this analysis we stick only with primary sources, that is, the actual video material. Details on the blockade are sparse. The best data we have is this image.

This is not much to work with, but let's start by estimating how high above the planet the blockade is (assuming that all ships are roughly the same distance from the planet). In order to calculate it from this image we need to know four things:

  1. The diameter of the planet
  2. The observed diameter of the planet on the imaging sensor
  3. The physical size of the image sensor 
  4. The focal length of the lens
The gravity on Naboo seems to match that of the Earth pretty closely so we'll use an estimate of 6000 km for the planet's radius. Unfortunately we don't know what imaging systems were in use a long time ago in the galaxy far, far away so to get anywhere we have to assume that space aliens use the same imaging technology we have. This gives a reasonable estimate of 35 mm film paired to a 50 mm lens. Captured image width on 35 mm film is 22 mm, so we'll use that value for sensor width. Width is used instead of height to avoid having to deal with possible anamorphic distortions.

Next we need to estimate the planet's observed size on the imaging sensor. This requires some manual curve fitting in Inkscape.

Scaling until the captured image is 22 mm wide tells us that the planet's observed diameter is 30 mm. Plugging these numbers into the relevant equations tells us that the blockade is 2⋅6000⋅50/30 = 20⋅10³ km away from the planet's center. We call this radius r₁. It is established in multiple movies that space ships in Star Wars can take off and land anywhere on a planet. When the queen's ship escapes they have to fight their way through the blockade, which would indicate that it covers the entire planet; otherwise they could have avoided the blockade completely just by changing their trajectory to fly through an area with no Trade Federation vessels.
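The distance estimate above is just the pinhole camera model. Here is a quick sketch of that arithmetic in Python, using the estimates from the text (the 22 mm frame width only matters for the Inkscape scaling step, not for the formula itself):

```python
# Estimates from the text: 6000 km planet radius, 50 mm lens,
# 30 mm observed planet diameter on the 22 mm wide frame.
PLANET_RADIUS_KM = 6000.0
FOCAL_LENGTH_MM = 50.0
OBSERVED_DIAMETER_MM = 30.0

def distance_to_object(real_size, image_size, focal_length):
    """Pinhole camera model: distance / real_size = focal_length / image_size."""
    return real_size * focal_length / image_size

r1 = distance_to_object(2 * PLANET_RADIUS_KM, OBSERVED_DIAMETER_MM, FOCAL_LENGTH_MM)
print(r1)  # 20000.0 km from the planet's center
```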

How many ships would this require? In order to calculate that we'd need to know how to place an arbitrary number of points on a sphere so that they are all equidistant. There is no formula for that (or at least that is what I was told at school, did not verify) so let's do a lower bound estimate. We'll assume that the blockade ships are at most 10 km apart. If they were any further, the queen's ship would have had no problems flying between the gaps. Each ship thus covers a circular area whose radius is 10 km. We call this r₂. Assuming perfect distribution of blockade vessels we can compute that it takes A₁/A₂ = π⋅(r₁)²/(π⋅(r₂)²) = (r₁)²/(r₂)² = (20⋅10³)²/10² = 4⋅10⁶ or 4 million ships to fully cover the area.
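The lower bound above can be reproduced numerically. Note that it compares flat disc areas; using the full sphere surface area 4πr₁² instead would quadruple the estimate, so the true count is even worse:

```python
# Reproducing the lower bound from the text.
r1 = 20e3  # blockade shell radius, km
r2 = 10.0  # coverage radius per ship, km

disc_estimate = r1 ** 2 / r2 ** 2        # the text's figure: 4 million ships
sphere_estimate = 4 * r1 ** 2 / r2 ** 2  # full sphere surface: 16 million ships

print(disc_estimate, sphere_estimate)
```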

This is not a profitable operation. Even if each ship had a crew complement of only 10, it would still mean having an invasion force of 40 million people just to operate the blockade. There is no way the tax revenue from Naboo (or any adjoining planets, or possibly the entire galaxy) could even begin to cover the costs of this operation.

The equatorial hypothesis

An alternative approach would be that space ships in the Star Wars universe can't launch from anywhere on the planet, only from equatorial sites taking advantage of the boost given by the planet's rotation.

In this case the blockade force would only need to cover a narrow band over the equator. It would need to block it wholly, however, to prevent launches from spaceports all around the planet. Using the numbers above we can calculate that having a ring of ships 10 km apart at blockade height takes approximately 2⋅π⋅r₁/r₂ = 2⋅π⋅20⋅10³/10 ≈ 13⋅10³ ships. This is a bit more feasible but not sufficient, because any escaping ship could avoid the blockade by flying 10 km above or below the equatorial plane. Thus the blockade must have height as well, and since it takes 13 000 ships per each 10 km of blockade, a 100 km tall blockade would take 130 000 ships and a 1000 km one 1.3 million ships. This is better but still not economically feasible.
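As a sanity check of the ring arithmetic, the circumference of the blockade shell divided by the 10 km ship spacing:

```python
import math

r1 = 20e3       # blockade radius, km
spacing = 10.0  # distance between adjacent ships, km

ring_ships = 2 * math.pi * r1 / spacing
print(round(ring_ships))  # roughly 13 000 ships per 10 km band
```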

The alternative planet size hypothesis

In the film Qui-Gon Jinn and Obi-Wan Kenobi are given a submersible called a bongo and told to proceed through the planet's core to get to Theed. The duration of this journey is not given, so we have to estimate it somehow. The journey takes place during a military offensive, and since not much seems to have happened during it, we assume that it took one hour. Based on visual inspection the bongo seems to travel at around 10 km/h. These measurements imply that Naboo's diameter is in fact only 10 km.

Plugging these numbers into the formulas above tells us that in this case the blockade orbits roughly 16 km from the planet's center and would need to guard a surface area of roughly 900 km². The ship count estimation formula breaks down in this case, as it says that it only takes 3 ships to cover the entire surface area. In any case this area could be effectively blocked with just a dozen ships or so. This would be feasible, and it would explain other things, too.
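Running the mini planet numbers through the same formulas:

```python
import math

# Alternative planet: 10 km diameter, same 50 mm lens and 30 mm observed size.
radius_km = 5.0
r1 = 2 * radius_km * 50.0 / 30.0  # ~16.7 km from the planet's center
area = math.pi * r1 ** 2          # ~870 km^2, the "roughly 900" figure
ships = r1 ** 2 / 10.0 ** 2       # ~2.8, i.e. about 3 ships

print(r1, area, ships)
```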

If Naboo really is this kind of a supermassive mini planet it most likely has some rare and exotic materials on it. Exporting those to other parts of the galaxy would make financial sense and thus taxing them would be highly profitable. This would also explain why the Trade Federation chose to land their invasion force on the opposite side of the planet. It is as far from Theed's defenses as possible. This makes it a good place to stage a ground assault since moving troops to their final destination still only takes at most a few hours.

Friday, December 23, 2022

After exactly 10 years, Meson 1.0.0 is out

The first ever commit to the Meson repository was made 10 years ago to this day. To celebrate we have just released the long-awaited version 1.0.

The original design criterion for doing a 1.0 release was "when Meson does everything GStreamer needs". This happened, checks notes, three years ago (arguably even earlier than that). That is not the world's fastest reaction time, but it mostly comes down to our development policy. Meson aims to make releases at a steady pace and maintains backwards compatibility fairly well (though not perfectly). There is unlikely to ever be a big breaking change, so there is no pressing technical need to bump the major version number.

Thus 1.0 is mostly a symbolic milestone rather than a technical one; end users should not really notice that big of a difference. This does not mean that the release is any less important, though. To celebrate, here is an assortment of random things that have happened over the years. Enjoy.

The greatest achievement

Perhaps the best example demonstrating the maturity of Meson is that I no longer make all the decisions. In fact most decisions, and especially the code that goes with them, are made by a diverse group of people. I do very little of the actual development; I'm more of a product owner of sorts who can only nudge the project in certain directions rather than turn the entire ship around on a dime. This is a bit sad, but absolutely necessary for the long term survival of the project. It means that if one of those malevolent buses that seem to stalk software developers succeeded in hitting me, its effect on the project would not be all that big.


There are two main reasons for reimplementing an existing open source project from scratch. The first one is that the upstream developer is a jerk and people don't want to work with them. The second is that someone, somewhere sees the project as important enough to warrant a second, independent implementation. I'm happy to report that (as far as I know at least) Meson is in the second group, because there is a second from-scratch implementation of Meson called Muon.

Meson is implemented in Python, but it was designed from the very start so that this is only an implementation detail. We spent a fair bit of effort ensuring that the Python bits don't leak into the DSL, even by accident. There wasn't really any way of being sure about that short of doing a second implementation, and now there is one, as Muon is implemented in plain C.

Collaborative design

We all know that disagreeing with other people on the Internet can be very discouraging. However, sometimes it works out incredibly well, such as in this merge request. That MR was really the first time a new feature was proposed where the submitter had a very different idea than me of what the API should look like. I distinctly remember feeling anxious about it at the time, because I basically had to tell the submitter that their work would not be accepted.

To my surprise everything went really well. Even though there were many people involved and they had wildly different ideas on how to get the feature done, there was no pouting, no stomping of feet, no shouting or the like (which, for the record, there had been in other similar discussions). Absolutely everybody involved really wanted to get the feature in and was willing to listen to others and change their stance based on the discussion. The final API turned out to be better than any of the individual proposals.

Thanks to contributors

According to GitHub statistics, a total of 717 different people have at least one commit in the repository. This number does not cover all the people who have contributed in other ways, like docs, bug triaging, converting existing projects and so on. It is customary to thank people who have made major contributions, like new features, in milestone posts like these.

I'm going to do something different instead. In addition to "the big stuff", any project has a ton of less than glamorous work like bug fixing, refactoring, code cleanups and the like. These tasks are just as important as the more glitzy ones, but they sadly go underappreciated in many organisations. To curb this trend I'd like to pick three people to thank for the simple reason that when it turned out that sh*t needed to be done, they rolled up their sleeves and did it. Over and over again.

The first one is Dylan Baker, who has done major reorganisation work in the code, including adding a lot of type hints and fixing the myriad of bugs that adding said type hints uncovered.

The second person is Eli Schwartz, who has done a ton of work all around, commented on many bug reports and on the Matrix channel. In fact he has done so much stuff that I suspect he never sleeps.

And finally we have Rosen Penev, who has done an astounding amount of work on WrapDB, both fixing existing wraps as well as updating them to new releases.

And finally: a secret

Meson gets a lot of bug reports. A lot a lot. Nirbheek Chauhan, one of the co-maintainers, once told me that Meson generates more bug email than all Gnome projects combined. I try my best to keep up with them, but the sad truth is that I don't have time to read most of them. Upon every release I have to clean up my mailbox by deleting all Meson bug mail.

The last time I did this I nuked more than 500 email threads in one go. No, not emails, email threads. So if you have wondered why your bug report has not gotten any replies, this is why. Simply reading the contents of Meson bug emails would be more than a full time job. Such is the price of success, I guess.

Monday, December 12, 2022

Print quality PDF generation, color separations, other fun stuff

Going from the simple color managed PDF generator discussed in the previous blog post to something more useful requires getting practical. So here is a screenshot of a "print ready" PDF document I generated with the code, showing a typical layout you'd use for a softcover book. As printers can't really print all the way to the edges of paper, the cover needs to be printed on a larger sheet and then cut to its final size.

It's not of artistically high quality, granted, but most of the technical bits are there:

  • The printed page is noticeably bigger than the "active area" and has a bunch of magic squiggles needed by the printing house
  • The output is fully color managed CMYK
  • The gray box represents the bleed area and in a real document the cover image would extend over it, but I left it like this for visualization purposes.
  • Text can be rendered and rotated (see spine)
  • The book title is rendered with gold ink, not CMYK inks
  • There are colorbars for quality control
  • The registration and cut marks (the "bullseyes" and straight lines at paper corners, respectively) are drawn on all plates using PDF's builtin functionality so they are guaranteed to be correctly aligned
  • None of the prepress marks are guaranteed to be actually correct, I just swiped them from various sources
The full PDF can be downloaded from this link. From this print PDF, we can generate separations (or "printing plates") for individual ink components using Ghostscript.
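One way to produce those separations is Ghostscript's tiffsep output device, which writes one grayscale TIFF per ink plate (spot colors included) plus a composite preview. A minimal sketch, assuming a hypothetical input file cover.pdf and Ghostscript installed as gs:

```python
import os
import shutil
import subprocess

def separation_command(pdf_path, out_pattern="plate_%d.tif"):
    # tiffsep emits one grayscale TIFF per colorant plus a composite preview.
    return ["gs", "-dBATCH", "-dNOPAUSE",
            "-sDEVICE=tiffsep", "-o", out_pattern, pdf_path]

cmd = separation_command("cover.pdf")
# Only run when Ghostscript and the input file are actually present.
if shutil.which("gs") and os.path.exists("cover.pdf"):
    subprocess.run(cmd, check=True)
```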

Looking at this you can find several interesting things. For example the gray box showing the bleed area is composed of C, M and Y inks instead of only K, even though it was originally defined as a pure gray in RGB. This is how LittleCMS chose to convert it, and it might or might not be what the original artist had in mind. High quality PDF generation is full of little quirks like this; blindly throwing numbers at color conversion functions is not enough to get good results, and end users might need fairly precise control over low level operations.

Another thing to note is how the renderer has left "holes" for the book title in CMYK plates even though all color is in the gold ink plate. This avoids mixing inks but on the other hand requires someone to do proper trapping. That is its own can of worms, but fortunately most people can let the RIP handle it (I think).

Sunday, December 4, 2022

Color management, this time with PDF

In previous posts the topic of color management in Cairo was examined. Since then people have told me a few things about the issue. According to them (and who am I to do proper background research and fact checking, I'm just someone writing on the Internet) there are a few fundamental problems with Cairo. The main one is that Cairo's imaging model is difficult to implement in GPU shaders. It is also (again, according to Internet rumors) pretty much impossible to make work with wide gamut and HDR content.

Dealing with all that plus printing (which is what I was originally interested in) seemed like too large a mouthful to swallow. One thing led to another, and thus, in the spirit of Bender, I wrote my own color managed PDF generator library. It does not try to do any imaging or such, it just exposes the functionality that is in the PDF image and document model directly. This turned out to take surprisingly little work, because it is a serialization/deserialization problem rather than an image processing one. You just dump the draw commands and pixels to a file and let the PDF viewer take care of showing them. Within a few days I had this:

This is a CMYK PDF that is fully color managed. The image on the second page was originally an RGB PNG image with an alpha channel that was converted to CMYK automatically. The red square is not part of the image, it is there to demonstrate that transparency compositing works. All drawing commands use the /DeviceCMYK color space. When creating the PDF you can select whether the output should be in RGB, grayscale or CMYK and the library automatically generates the corresponding PDF commands. All of the heavy lifting is done by LittleCMS, there are no unmanaged color conversions in the code base.

Since everything was so straightforward, I went even further. That screenshot is not actually showing a CMYK PDF. The yellow text on the purple background is a spot color that uses a special gold ink. Thus the PDF has five color channels instead of four. These are typically used only in high quality print shops for special cases, like printing with specific Pantone inks or specifying which parts of the print should be embossed/debossed, varnished or the like.

What would it take to use this for realsies?

There does seem to be some sort of a need for a library that produces color managed PDFs. It could be used at least by Inkscape and Gimp, possibly others as well. In theory Cairo could also use it for PDF generation so it could delete its own PDF backend code (and possibly also the PostScript one) and concentrate only on pushing and compositing pixels. Then again, maybe its backwards compatibility requirements are too strict for that.

In any case the work needed splits neatly into two parts. The first one is exposing the remaining functionality in PDF in the API. Most of it is adding functions like "draw a bezier with values ..." and writing out the equivalent PDF command. As the library itself does not even try to have its own imaging model, it just passes things directly on. This takes elbow grease, but is fairly simple.
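As a sketch of what that serialization looks like, here are hypothetical helpers (not the library's actual API) that emit the raw PDF path operators: m for moveto, c for a cubic bezier, S for stroke:

```python
def moveto(x, y):
    # "m" starts a new subpath at (x, y).
    return f"{x} {y} m"

def bezier(x1, y1, x2, y2, x3, y3):
    # "c" appends a cubic bezier from the current point to (x3, y3).
    return f"{x1} {y1} {x2} {y2} {x3} {y3} c"

# A stroked curve as it would appear inside a page content stream.
stream = "\n".join([moveto(10, 10), bezier(20, 80, 80, 80, 90, 10), "S"])
print(stream)
```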

The other part is text. PDF's text model predates Unicode so it is interesting to say the least. The current code only supports (a subset of) PDF builtin fonts and really only ASCII. To make things actually work you'd probably need to reimplement Cairo's glyph drawing API. Basically you should be able to take PangoCairo, change it a bit and point it to the new library and have things work just as before. I have not looked into how much work that would actually be.

There are currently zero tests and all validation has been done with the tried and true method of "try the output on different programs and fix issues until they stop complaining". For real testing you'd need access to a professional level printer or Adobe Acrobat Pro and I have neither.

Tuesday, November 29, 2022

Going inside Cairo to add color management

Before going further you might want to read the previous blog post. Also, this:

I don't really have prior experience with color management, Cairo internals or the like. I did not even look at the existing patchsets for this. They are fairly old so they might have bitrotted and debugging that is not particularly fun. This is more of a "the fun is in the doing" kind of thing. What follows is just a description of things tried, I don't know if any of it would be feasible for real world use.

Basically this is an experiment. Sometimes experiments fail. That is totally fine.

Main goals

There are two things that I personally care about: creating fully color managed PDFs (in grayscale and CMYK) and making the image backend support images in colorspaces other than sRGB (or, more specifically, "uncalibrated RGB which most of the time is sRGB but sometimes isn't"). The first of these two is simpler as you don't need to actually do any graphics manipulations, just specify and serialize the color data out to the PDF file. Rendering it is the PDF viewer's job. So that's what we are going to focus on.

Color specification

Colors in Cairo are specified with the following struct:

struct _cairo_color {
    double red;
    double green;
    double blue;
    double alpha;

    unsigned short red_short;
    unsigned short green_short;
    unsigned short blue_short;
    unsigned short alpha_short;
};

As you can probably tell, it is tied very tightly to the fact that internally everything works with (uncalibrated) RGB; the latter four elements are used for premultiplied alpha computations. It also has a depressingly amusing comment above it:

Fortunately this struct is never exposed to the end users. If it were it could never be changed without breaking backwards compatibility. Somehow we need to change this so that it supports other color models. This struct is used a lot throughout the code base so changing it has the potential to break many things. The minimally invasive change I could come up with was the following:

struct _comac_color {
    comac_colorspace_t colorspace;
    union {
        struct _comac_rgb_color rgb;
        struct _comac_gray_color gray;
        struct _comac_cmyk_color cmyk;
    } c;
};

The definition of the rgb color is the original color struct. With this change every usage of this struct inside the code base becomes a compile error, which is exactly what we want. At every point the code is changed so that it first asserts that the colorspace is RGB and then accesses the rgb part of the union. In theory this change should be a no-op, and unless someone has done memcpy/memset magic, it is also a no-op in practice. After some debugging (surprisingly little, in fact) this change seemed to work just fine.

Color conversion

Ideally you'd want to use LittleCMS, but making it a hard dependency seems a bit suspicious. There are also use cases where people would like to use other color management engines, and even select between them at run time. So a callback function it is:

typedef void (*comac_color_convert_cb) (comac_colorspace_t, /* target colorspace */
                                        const double *,     /* input color values */
                                        double *,           /* output color values */
                                        void *);            /* user data */

This only converts a single color element. A better version would probably need to take a count so it could do batch operations. I had to put this in the surface struct, since they are standalone objects that can be manipulated without a drawing context.

Generating a CMYK PDF stream is actually fairly simple for solid colors. There are only two color setting operators, one for stroking and one for nonstroking color (there may be others in e.g. gradient batch definitions, I did not do an exhaustive search). That code needs to be changed to convert the color to match the format of the output PDF and then serialize it out.

CMYK PDF output

With these changes creating simple CMYK PDF files becomes fairly straightforward. All you need to do as the end user is to specify color management details on surface creation:

comac_pdf_surface_create2 (

and then enjoy the CMYK goodness:

What next?

Probably handling images with ICC color profiles.

Saturday, November 26, 2022

Experimenting on how to add CMYK and color management to Cairo

Cairo is an amazing piece of tech that powers a lot of stuff, like all of GTK. Unfortunately it is not without its problems. The biggest one being that it was designed almost 20 years ago with the main use case of dealing with "good old" 8 bit uncalibrated RGB images. There has been a lot of interest in adding native support for things like CMYK documents, linear RGB, color calibration, wide gamuts and all of that good stuff. Sadly it has not come to be.

The reasons are mostly the same as always. The project is sadly understaffed and there does not seem to be a corporate sponsor to really drive the development forward. What makes things extra difficult is that Cairo supports a lot of different backends like PostScript, Win32, Quartz and SVG. So if someone wants to add new features to Cairo, not only do they need to understand how color math works and how to write C, they also need to handle all the various backends. That is a rare combination of skills. In any case the patchset needed to make all that happen would be enormous and thus hard to get reviewed and merged.

As an exercise I thought I'd see if I could change the landscape somewhat to make it easier to experiment with the code base. Out of this came this project repo called Comac (for, obviously, COlor MAnaged Cairo). The creation process was fairly straightforward:

  • Take the latest Cairo trunk
  • Delete every backend except image and PDF
  • Rename all files and symbols from cairo-something to comac-something
  • Add minimal code for generating grayscale and CMYK PDFs
  • Create a demo app that creates test PDFs
The output files are very simple, but the whole thing actually works. Both Okular and Acrobat Reader will happily display the documents without errors. Thus people who are interested in color work can now actually look into it without having to understand how Xlib works. Since Comac does not need to follow all of Cairo's backwards compatibility guarantees, experimenters can play a bit more fast & loose.

Want to contribute?

Merge requests welcome, I guess? I have not thought too deeply about governance and all that. If there is a lot of interest, this could be moved to its own top level project rather than living in my Github pages. Just note that the goal is not to fork things and create yet another graphics library. In the best possible case this project is only used to discover a good API and an idea on how it could best be implemented. These changes could then be put into Cairo core.

Monday, November 14, 2022

If you don't tolerate it in new code you should not tolerate it in old code either

Let's assume that you are working on a code base and notice that it has some minor issue. For argument's sake we'll say that it has some self written functionality and that the language's standard library has recently added identical functionality. Let's further assume that said implementation behaves exactly the same as the self written one. At this point you might decide to clean up the code base, make it use the stdlib implementation and delete the custom code. This seems like a nice cleanup, so you file a merge request to get the thing changed.

It might be accepted and merged without problems. On the other hand it might be blocked, either temporarily or permanently. There are several valid reasons for not merging the change. For example:

  1. That part of the code does not have sufficient tests so we don't know if the change is safe.
  2. A release is coming up soon, so we are minimizing all changes that might cause regressions, no matter how minor. The merge can only be done after the release is out.
  3. We need to support old platform X, whose stdlib does not have that functionality.
  4. It's not immediately obvious that the two implementations behave identically (for example, because the old implementation has load-bearing bugs) so a change might introduce bugs.

Getting blocked like this is a bit unfortunate, but these things happen. The important thing, however, is that all of these are solid technical reasons for not doing the cleanup (or at least not doing it immediately). Things get bad when you get blocked for some other reason, such as your reviewer asking "why are you even doing this at all". This is a reasonable question, so let's examine it in detail.

Suppose that instead of submitting a cleanup commit you are instead submitting a piece of completely new functionality. In this MR you have chosen to reimplement a piece of standard library code (for no actual gain, just because you were not aware of its existence). The review comment that you should get is "You are reimplementing stdlib functionality, delete that code here and use the standard library instead". This is a valid review comment and something that should be heeded.

The weird thing here is that this is in a way the exact same change, but it is either not acceptable or absolutely necessary depending on whether parts of the code are already inside your repo or not. This is weird and should not be the case, but human beings are strangely irrational and their "value functions" are highly asymmetric. This can lead to lots of review fighting if one person really wants to fix the issue whereas some other one does not see the value in it. The only real solution is to have a policy on this, specifically that submitting a change that fixes an issue that would be unacceptable in new code is, by itself, a sufficient reason to do the work but not to merge it without technical review. This shifts the discussion from "should this be done at all" to "what are the technical risks and merits of this change", which is the way reviews should be done.

I should emphasize that this does not mean that you should go and immediately spend 100% of your time fixing all these existing issues in your code base. You could, but probably it is not worth it. What this guideline merely says is that if you happen to come across an issue like this, and if you feel like fixing it, then you can do it, and reviewers can't block you merely with an "I personally don't like this" line of reasoning.

A nastier version

Suppose again that you are a person who cares about the quality of your work. That you want to write code of sufficiently good quality and actually care about things like usability, code understandability and long term maintainability. For such people there exists a blocker comment that is much worse than the one above. In fact it is the single biggest red flag for working conditions I have ever encountered:

But we already have [functionality X] and it works.

This objection does have a grain of truth in it: existing code does have value. Sadly it is most commonly used as a convenient way to hard-block the change without needing to actually think about it or having to explain one's reasoning.

Several times I've had to make some change into existing code and hours and days of debugging later found that the existing self written code has several bugs that a stdlib implementation definitely would not have. After reporting my findings and suggesting a cleanup the proposal has been knocked out with reasoning above. As a consultant I have to remain calm and respect the client's decision but the answer I have wanted to shout every time is: "No it's not! It's completely broken. I just spent [a large amount of time] fixing it because it was buggy and looking at the code makes me suspect that there are a ton of similar bugs in it. Your entire premise is wrong and thus every conclusion drawn from it is incorrect."

And, to reiterate, even this is not a reason by itself to actually go and change the code. It might get changed, it might not. The point is the prevailing attitude around minor fixups and how it matches your personal desire of getting things done. If you are a "fixing things proactively makes sense" kind of person you are probably not going to be happy in a "let's hide under the bed and pretend everything is fine" kind of organization and vice versa.

Sunday, October 23, 2022

Making Visual Studio compilers directly runnable from any shell (yes, even plain cmd.exe)

The Visual Studio compiler toolchain behaves in peculiar ways. One of the weirdest is that you can't run the compiler from any shell. Instead you have to run the compiler either from a special, blessed shell that comes with VS (the most common are the "x86 native tools shell" and the "x64 native tools shell", and there are also ARM shells as well as cross compilation shells) or by running a special bat file inside a pristine shell that sets things up. A commonly held misconception is that using the VS compiler only requires setting PATH correctly. That is not true, it requires a bunch of other stuff as well (I'm not sure if all of it is even documented).

To anyone who has used unixy toolchains, this is maddening. The classic Unix approach is to have compiler binaries with unique names, like a hypothetical armhf-linux-gcc-11, that can be run from any shell. Sadly this VS setup has been the status quo for decades now and it is unlikely to change. In fact, some time ago I had a discussion with a person from Microsoft where I told them about this problem, and the response I got back was, effectively, "I don't understand what the problem is" followed by "just run the compiles from the correct shell".

So why is this a bad state of things then? There are two major issues. The first one is that you have to remember how every one of your build trees has been set up. If you accidentally run a compilation command using the wrong shell, the outcome is very much undefined. This is the sort of thing that happens all the time, because human beings are terrible at remembering the states of complicated systems and the specific actions that need to be taken depending on those states (as opposed to computers, which are exceptionally good at those things). The second niggle is that you can't have two different compilers active in the same shell at the same time. So if, for example, you are cross compiling and you need to build and run a tool as part of that compilation (e.g. Protobuf), then you can't do that with the command line VS tools. Dunno if it can be done with solution files either.

Rolling up them sleeves

The best possible solution would be for Microsoft to provide compiler binaries that are standalone and parallel runnable with unique names like cl-14-x64.exe. This seems unlikely to happen in the near future so the only remaining option is to create them ourselves. At first this might seem infeasible but the problem breaks neatly down into two pieces:

  • Introspect all changes that the vsenv setup bat file performs on the system.
  • Generate a simple executable that does the same setup and then invokes cl.exe with the same command line arguments as were given to it.
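The introspection step boils down to diffing the environment before and after the bat file runs. A minimal sketch of that idea in Python (the `cmd /c "... && set"` trick and the vcvars path are illustrative assumptions; the real script in the repository does considerably more):

```python
import subprocess

def capture_env(bat_file=None):
    """Run `set` in cmd.exe, optionally after sourcing a bat file,
    and parse the output into a dict."""
    cmd = "set" if bat_file is None else f'"{bat_file}" && set'
    out = subprocess.check_output(["cmd", "/c", cmd], text=True)
    env = {}
    for line in out.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            env[key.upper()] = value
    return env

def env_delta(before, after):
    """Keep only the variables the bat file added or changed."""
    return {k: v for k, v in after.items() if before.get(k) != v}

# Illustrative usage (Windows only; the vcvars path is an assumption):
# base = capture_env()
# vs = capture_env(r"C:\...\VC\Auxiliary\Build\vcvars64.bat")
# print(env_delta(base, vs))
```

The delta is what the generated wrapper executable then has to replay before invoking cl.exe.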

The code that implements this can be found in this repository. Most of it was swiped from Meson VS autoactivator. When you run the script in a VS dev tools shell (it needs access to VS to compile the exe) you get a cl-x64.exe that you can then use from any shell. Here we use it to compile itself for the second time:


Process invocation on Windows is not particularly fast and with this approach every compiler invocation becomes two process invocations. I don't know enough about Windows to know whether one could avoid that with dlopen trickery or the like.

For actual use you'd probably need to generate these wrappers for VS linkers too.

You have to regenerate the wrapper binary every time VS updates (at least for major releases, not sure about minor ones).

The end result has not been validated beyond simple tests. It is a PoC, after all.

Thursday, October 6, 2022

Using cppfront with Meson

Recently Herb Sutter published cppfront, an attempt to create a new syntax for C++ that fixes many issues which can't be changed in existing C++ because of backwards compatibility. Like the original cfront compiler, cppfront works by parsing the "new syntax" C++ and transpiling it to "classic" C++, which is then compiled in the usual way. These kinds of source generators are fairly common (it is basically how Protobuf et al work), so let's look at how to add support for this in Meson. We are also going to download and build the cppfront compiler transparently.

Building the compiler

The first thing we need to do is to add Meson build definitions for cppfront. It's basically this one file:

project('cppfront', 'cpp', default_options: ['cpp_std=c++20'])

cppfront = executable('cppfront', 'source/cppfront.cpp',
  override_options: ['optimization=2'])

meson.override_find_program('cppfront', cppfront)
cpp2_dep = declare_dependency(include_directories: 'include')

The compiler itself is a single source file, so building it is simple. The only thing to note is that we override the settings so it is always built with optimizations enabled. This is acceptable in this particular case because the end result is not used for development, only for consumption. The more important bits for integration purposes are the last two lines, where we declare that from now on, whenever someone does a find_program('cppfront'), Meson does not do a system lookup for the binary but returns the just-built executable object instead. Code generated by cppfront requires a small amount of helper functionality, which is provided as a header-only library. The last line defines a dependency object that carries this information (basically just the include directory).

Building the program

The actual program is just a helloworld. The Meson definition needed to build it is this:

project('cpp2hello', 'cpp',
    default_options: ['cpp_std=c++20'])

cpp2_dep = dependency('cpp2')
cppfront = find_program('cppfront')

g = generator(cppfront,
  output: '@BASENAME@.cpp',
  arguments: ['@INPUT@', '-o', '@OUTPUT@'])

sources = g.process('sampleprog.cpp2')

executable('sampleprog', sources,
   dependencies: [cpp2_dep])

That's a bit more code but still fairly straightforward. First we look up the cppfront program and the corresponding dependency object. Then we create a generator that translates cpp2 files to cpp files, feed it some input and compile the result.

Gluing it all together

Each one of these is its own isolated repo (available here and here respectively). The simple thing would have been to put both of these in the same repository but that is very inconvenient. Instead we want to write the compiler setup once and use it from any other project. Thus we need some way of telling our app repository where to get the compiler. This is achieved with a wrap file:


[provide]
cpp2 = cpp2_dep
program_names = cppfront

Placing this in the consuming project's subprojects directory is all it takes. When you start the build and try to look up either the dependency or the executable name, Meson will see that they are provided by the referenced repo and will clone, configure and build it automatically:

The Meson build system
Version: 0.63.99
Source dir: /home/jpakkane/src/cpp2meson
Build dir: /home/jpakkane/src/cpp2meson/build
Build type: native build
Project name: cpp2hello
Project version: undefined
C++ compiler for the host machine: ccache c++ (gcc 11.2.0 "c++ (Ubuntu 11.2.0-19ubuntu1) 11.2.0")
C++ linker for the host machine: c++ ld.bfd 2.38
Host machine cpu family: x86_64
Host machine cpu: x86_64
Found pkg-config: /usr/bin/pkg-config (0.29.2)
Found CMake: /usr/bin/cmake (3.22.1)
Run-time dependency cpp2 found: NO (tried pkgconfig and cmake)
Looking for a fallback subproject for the dependency cpp2

Executing subproject cppfront 

cppfront| Project name: cppfront
cppfront| Project version: undefined
cppfront| C++ compiler for the host machine: ccache c++ (gcc 11.2.0 "c++ (Ubuntu 11.2.0-19ubuntu1) 11.2.0")
cppfront| C++ linker for the host machine: c++ ld.bfd 2.38
cppfront| Build targets in project: 1
cppfront| Subproject cppfront finished.

Dependency cpp2 from subproject subprojects/cppfront found: YES undefined
Program cppfront found: YES (overridden)
Build targets in project: 2

As you can tell from the logs, Meson first tries to find the dependencies from the system and only after it fails does it try to download them from the net. (This behaviour can be altered.) Now the code can be built and the end result run:

$ build/sampleprog
Cpp2 compilation is working.

The code has only been tested with GCC but in theory it should work with Clang and VS too.

Wednesday, September 28, 2022

"Why is it that package managers are unnecessarily hard?" — or are they?

At the moment the top rated post in the C++ subreddit is "Why is it that package managers are unnecessarily hard?". The poster wants to create an application that uses fmt and SDL2. After writing a lengthy and complicated (for the task) build file, installing a package manager and integrating the two, their attempt to build the code fails, leaving only incomprehensible error messages in its wake.

The poster is understandably frustrated about all this and asks a reasonable question about the state of package management. The obvious follow-up question is whether package managers actually need to be hard. Let's try to answer that by implementing what they were trying to do from absolute scratch using Meson. For extra challenge we'll do it on Windows, to be entirely sure we are not using any external dependency providers.

The requirements


  • A fresh Windows install with Visual Studio
  • No vcpkg, Conan or any other third party package manager installed (more strictly, they can be installed, just ensure that they are not used)
  • Meson installed so that you can run it just by typing meson from a VS dev tools command prompt (if your setup requires invoking it via python instead, adjust the commands below accordingly)
  • Ninja installed in the same way (you can also use the VS solution generator if you prefer, in which case this is not needed)

The steps required

Create a subdirectory to hold source files.

Create a meson.build file in said dir with the following contents.

project('deptest', 'cpp',
    default_options: ['default_library=static'])
fmt_dep = dependency('fmt')
sdl2_dep = dependency('sdl2')
executable('deptest', 'deptest.cpp',
   dependencies: [sdl2_dep, fmt_dep])

Create a deptest.cpp file in the same dir with the following contents:


#include <SDL.h>
#include <fmt/core.h>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fmt::print("Unable to initialize SDL: {}", SDL_GetError());
        return 1;
    }
    SDL_version sdlver;
    SDL_GetVersion(&sdlver);
    fmt::print("Currently using SDL version {}.{}.{}.",
               sdlver.major, sdlver.minor, sdlver.patch);
    return 0;
}

Start a Visual Studio x64 dev tools command prompt, cd into the source directory and run the following commands.

mkdir subprojects
meson wrap install fmt
meson wrap install sdl2
meson build
ninja -C build

This is all you need to do to get the following output:

Currently using SDL version 2.24.0.

Most people would probably agree that this is not "unnecessarily hard". Some might even call it easy.

Monday, September 19, 2022

Diving deeper into custom PDF and epub generation

In a previous blog post I looked into converting a custom markup text format into "proper" PDF and epub documents. The format at the time was very simple and could not express even basic things like italic text. That was acceptable at first, but as time went on it started to feel unsatisfactory.

Ergo, here is a sample input document:

# Demonstration document

This document is a collection of sample paragraphs that demonstrate
the different features available, like /italic text/, *bold text* and
even |Small Caps text|. All of Unicode is supported: ", », “, ”.

The previous paragraph was not indented as it is the first one following a section title. This one is indented. Immediately after this paragraph the input document will have a scene break token. It is not printed, but will cause vertical white space to be added. The
paragraph following this one will also not be indented.


A new scene has now started. To finish things off, here is a
standalone code block:

/* Cool stuff here */

This is "Markdown-like" but deliberately not Markdown, because novel typesetting has requirements that can't easily be retrofitted into Markdown. When processed, this yields the following output:

Links to generated documents: PDF, epub. The code can be found on Github.

A look in the code

An old saying goes that the natural data structure for any problem is an array, and if it is not, you should change the problem so that it is. That turned out very much to be the case here. The document is an array of variants (section, paragraph, scene change etc.). Text is an array of words (split at whitespace) which gets processed into output, which is an array of formatted lines. Each line is an array of formatted words.

For computing the global chapter justification and final PDF it turned out that we need to be able to render each word in its final formatted form, and also hyphenated sub-forms, in isolation. This means that the elementary data structure is this:

struct EnrichedWord {
    std::string text;
    std::vector<HyphenPoint> hyphen_points;
    std::vector<FormattingChange> format;
    StyleStack start_style;
};

This is "pure data" and fully self-contained. The fields are obvious: text has the actual text in UTF-8. hyphen_points lists all points where the word can be hyphenated and how. For example if you split the word "monotonic" in the middle you'd need to add a hyphen to the output but if you split the hypothetical combination word "meta–avatar" in the middle you should not add a hyphen, because there is already an en-dash at the end. format contains all points within the word where styling changes (e.g. italic starts or ends). start_style is the only trickier one. It lists all styles (italic, bold, etc) that are "active" at the start of the word and the order in which they have been declared. Since formatting tags can't be nested, this is needed to compute and validate style changes within the word.
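The hyphen-or-not decision for the two example words above can be sketched like this (a toy illustration; the actual code stores richer per-split data in HyphenPoint):

```python
def hyphenated_prefix(text: str, split_idx: int) -> str:
    """Return the visible end-of-line text when a word is split at
    split_idx: add a hyphen unless the prefix already ends in a dash."""
    prefix = text[:split_idx]
    if prefix.endswith(("-", "–", "—")):
        return prefix
    return prefix + "-"

print(hyphenated_prefix("monotonic", 4))    # mono-
print(hyphenated_prefix("meta–avatar", 5))  # meta– (ends in en-dash, no extra hyphen)
```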

Given an array of these enriched words, the code computes another array of all possible points where the text stream can be split, both within and between words. The output of the justification algorithm is then yet another array containing the chosen split points. With this the final output can be created fairly easily: each output line is the text between split points n and n+1.
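Ignoring intra-word hyphenation splits for simplicity, that last step can be sketched as (names made up for illustration):

```python
def build_lines(words, split_points):
    """Each output line is the text between consecutive split points.
    split_points are indices into the word array, with 0 and
    len(words) as the first and last entries."""
    return [" ".join(words[start:end])
            for start, end in zip(split_points, split_points[1:])]

words = "the quick brown fox jumps over the lazy dog".split()
print(build_lines(words, [0, 4, 9]))
# ['the quick brown fox', 'jumps over the lazy dog']
```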

The one major missing typographical feature is widow and orphan control. The code merely splits the page whenever it is full. Interestingly, it turns out that doing this properly uses the same algorithm as paragraph justification. The difference is that the penalty terms are things like "widow existence" and "adjacent page height imbalance".

But that, as they say, is another story. Which I have not written yet and might not do for a while because there are other fruit to fry.

Sunday, September 4, 2022

Questions to ask a prospective employer during a job interview

Question: Do developers in your organization have full admin rights on their own computer?

Rationale: While blocking admin rights might make sense for regular office workers, it is a massive hindrance for software developers. They need admin access for many things, and not giving it to them is a direct productivity hit. You might also note that Google gives all their developers root access to their own dev machines and see how they respond.

Question: Are developers free to choose and install the operating system on their development machines? If yes, can you do all administrative and bureaucratic tasks from "non-official" operating systems?

Rationale: Most software projects nowadays deal with Linux in some way, and many people are thus more productive (and happier) if they can use a Linux desktop for their development. If the company mandates the use of an "IT-approved" Windows install where 50% of all CPU time is spent on virus scanners and the like, productivity takes a big hit. There are also some web services that either just don't work on Linux or are a massive pain to use if they do (the web UI of Outlook being a major guilty party here).

Question: How long does it take to run the CI for new merge requests?

Rationale: Anything under 10 minutes is good. Anything over 30 minutes is unacceptably slow. Too slow a CI means that instead of submitting small, isolated commits, people start aggregating many changes into a small number of mammoth commits because it is the only way to get things done. This causes code quality to plummet.

Question: Suppose we find a simple error, like a typo in a log message. Could you explain the process one needs to follow to get that fixed and how long it approximately takes? Explicitly list all the people in the organization who are needed to authorize said change.

Rationale: The answer to this should be very similar to the one above: make the commit, submit it for review, get an ack, merge. It should take minutes. Sometimes that is not the case. Maybe you are not allowed to work on issues that don't have an associated ticket or that are not pre-approved for the current sprint. Maybe you need to first create a ticket for the issue. Maybe you first need to get manager approval to create said ticket (you laugh, but these processes actually exist; no, I'm not kidding). If their reply contains phrases like "you obtain approval from X", demand details: how do you apply for approval, who grants it, how long is it expected to take, what happens if your request is rejected, and so on. If the total time is measured in days, draw your own conclusions and act accordingly.

Question: Suppose that I need to purchase some low-cost device like a USB hub for development work. Could you explain the procedure needed to get that done? 

Rationale: The answer you want is either "go to a store of your choice, buy what you need and send the receipt to person X" or possibly "send a link to person X and it will be on your desk (or delivered home) within two days". Needing to get approval from your immediate manager is sort of ok, but needing to go any higher or sideways in the org chart is a red flag and so is needing to wait more than a few days regardless of the reason.

Question: Could you explain the exact steps needed to get the code built?

Rationale: The steps should be "do a git clone, run the build system in the standard way, compile, done". Having a script you can run that sets up the environment is also fine. Having a short wiki page with the instructions is tolerable. Having a long wiki page with the instructions is bad. "Try compiling and when issues arise ask on slack/teams/discord/water cooler/etc" is very bad.

Question: Can you build the code and run tests on the local machine as if it was a standard desktop application?

Rationale: For some reason corporations love creating massive build clusters and the like for their products (which is fine) and then making it impossible to build the code in isolation (which is not fine). Being able to build the project on your own machine is pretty much mandatory, because if you can't build locally then e.g. IDE autocompletion does not work, as there is no working compile_commands.json.

This applies even to most embedded projects. A well designed embedded project can be configured so that most of the code can be built and tested with the host compiler and only the hardware-touching bits need cross compilation. This obviously does not cover all cases, such as very low level firmware that is mostly assembly. You have to use your own judgement here.

Question: Does this team or any of the related teams have a person who actively rejects proposals to improve the code base?

Rationale: A common dysfunction in established organizations is the "big fish in a small pond" developer: one who has been working on said code for a long time but has not kept up with what has been happening in the software development landscape at large. They will typically hard-reject all attempts to improve the code and related processes to match current best practices, using phrases like "I don't think that would improve anything", "that can't work" (no further reasoning given) and the ever popular "but we already have [implementation X, usually terrible] and it works". In extreme cases, if their opinions are challenged they resort to personal attacks. Because said person is the only one who truly understands the code, management is unwilling to reprimand them out of fear that they might leave.

Out of all the questions in this list, this one is the most crucial. Having to work with such a person is a miserable experience and typically a major factor in employee churn. This is also the question that prospective employers are most likely to flat out lie about, because they know that if they admit to this, they can't hire anyone. If you are interviewing with a manager, they might not even know that they have such a person on their team. The only reliable way to find out is to talk with actual engineers after they have had several beers, which is hard to organize before getting hired.

Question: How many different IT support organizations do you have? Where are they physically located?

Rationale: The only really acceptable answer is "in the same country as your team" (there are exceptions, such as being the only employee in a given country, working 100% remote). Any other answer means that support requests take forever to complete and are a direct and massive drain on your productivity. The reason for this inefficiency is that if you have your "own" support, you can communicate with each other like regular human beings. If support is physically separated, you become just another faceless entry in a never ending ticketing queue somewhere, and things that should take 15 minutes take weeks (these people typically need to serve many organisations in different countries and are chronically overworked).

The situation is even worse if IT support is moved to a physically distant location, and worse still if it is bought as a service from a different corporation. A typical case is a corporation in Europe or the USA outsourcing all of its IT support to India or Bangladesh. This is problematic, not because people in said countries are not good at their jobs (I've never met them so I can't really say) but because these transfers are always done to minimize costs. Thus a core part of the organization's engineering productivity is tied to an organisation that is 5-10 time zones away, made the cheapest offer and over which you can't exert any organizational pressure should it be needed. This is not a recipe for success. If there is more than one such external company within the organization, failure is almost guaranteed.

Question: Suppose the team needs to start a new web service like a private Gitlab, add new type of CI to the testing pipeline or something similar. Could you explain the exact steps needed to get it fully operational? Please list all people who need to do work to make this happen (including just giving authorization), and time estimates for each individual step.

Rationale: This follows directly from the above. Any answer that has more than one manager and takes more than a day or two is a red flag.

Question: Please list all the ways you monitor your employees during work hours. For example, state whether you have a mandatory web proxy that everyone must use and enumerate all pieces of security and tracking software installed on employees' computers. Do you require all employees to assign all work hours to projects? If yes, at what granularity? If the granularity is less than one hour, does your time sheet contain an entry for "entering data into the work hour tracking sheet"?

Rationale: This one should be fairly obvious, but note that you are unlikely to get a straight answer.

Friday, September 2, 2022

Looking at LibreOffice's Windows installer

There has long been a desire to get rid of Cygwin as a build dependency of LibreOffice. In addition to building dependencies, it is also used to create the Windows MSI installer. For reasons I no longer remember, I chose to look into replacing just that bit with some modern tooling. This is the tragedy that followed.

The first step in replacing something old is to determine what the old system does and how it works. Given that it is an installer, one would expect it to use WiX, NSIS or maybe even some lesser known installer tool. But of course you already guessed that's not going to be the case. After a sufficient amount of digging you discover that the installer is invoked by this (1600+ line) Perl script. It imports 50+ other internal Perl modules. This is not going to be a fun day.

Eventually you stumble upon this file and uncover the nasty truth. The installer works by manually building a CAB file with makecab.exe and Perl. Or at least that is what I think it does; with Perl you can never be sure. It might even be dead code that someone forgot to delete. So I asked my LO acquaintances whether that is how it actually works. The answer? "Nobody knows. It came from somewhere and was hooked into the build and nobody has touched it since."

When the going gets tough, the tough start compiling dependencies from scratch

In order to see whether that is actually what is happening, we need to be able to run it and see what it does. For that we first need to compile LO. This is surprisingly simple: there is a script that does almost all of the gnarly bits needed to set up the environment. So then you just run it? Of course you don't.

When you start the build, things seem to work at first, but then one of the dependencies misdetects the build environment as mingw rather than cygwin and promptly fails to build. A web search finds this email thread which says that the issue has been fixed. It has not.

I don't even have mingw installed on this machine.

It still detects the environment as mingw.

Then I uninstalled win-git and everything that could possibly be interpreted as mingw.

It still detects the environment as mingw.

Then I edited the master Makefile to pass an explicit environment flag to the subproject's configure invocation.

It still detects the environment as mingw.

Yes, I did delete all cached state I could think of between each step. It did not help.

I tried everything I could and eventually had to give up. I could not make LO compile on Windows. Back to the old drawing board.

When unzipping kills your machine

What I actually wanted to test was to build the source code, take the output directory and pass that to the msicreator program, which converts standalone directories to MSI installers using WiX. This is difficult to do if you can't even generate the output files in the first place. But there is a way to cheat.

We can take the existing LO installer, tell the Windows installer to just extract the files into a standalone directory and then use that as the input data. So I did the extraction, and it crashed Windows hard. It brought up the crash screen and the bottom half of it had garbled graphics. Which is an impressive achievement for what is effectively the same operation as unpacking a zip file. Then it did it again. The third time I had my phone camera ready to take a picture, but that time it succeeded. Obviously.

After fixing a few bugs in msicreator and the like I could eventually build my own LibreOffice installer. I don't know if it is functionally equivalent, but at least most of the work should be there. So, assuming you can do the equivalent of DESTDIR=/foo make install with LO on Windows, you should be able to replace the unmaintained multi-thousand-line Perlthulhu with msicreator and something like this:

{
    "upgrade_guid": "SOME-GUID-HERE",
    "version": "7.4.0",
    "product_name": "LibreOffice",
    "manufacturer": "LibreOffice something something",
    "name": "LibreOffice",
    "name_base": "libreoffice",
    "comments": "This is a comment.",
    "installdir": "printerdir",
    "license_file": "license.rtf",
    "need_msvcrt": true,
    "parts": [
        {
            "id": "MainProgram",
            "title": "The Program",
            "description": "The main program",
            "absent": "disallow",
            "staged_dir": "destdir"
        }
    ]
}
In practice it probably won't be this simple, because it never is.

Wednesday, August 24, 2022

Random things on designing a text format for books

In previous blog posts there was some talk about implementing a simple system that generates books (both PDF and ebook) from plain text input files. The main question is what the input format should be. Currently there are basically two established formats: LaTeX and Markdown. The former is especially good if the book has a lot of figures, cross references, indexes and all that. The latter is commonly used in most modern web systems, but it is better suited to specifying text in the "web page" style than to text split aesthetically over pages.

The obvious solution when faced with this issue is to design your own file format that fits your needs perfectly. I did not do that, but I did think about the issue and do some research. This is the outcome. It is not a finished design; think of it instead as a grouping of loosely related observations and design requirements that you'd need to deal with when creating such a file format.

The requirements


The file format should be usable for creating traditional novels like The Lord of the Rings and The Hitch-Hiker's Guide to the Galaxy. The output will be a "single flow" of text separated by chapter headings and the like. There needs to be support for different paragraph styles for printing things like poems, telegraphs or computer printouts in different fonts and indentation.

The input files must be UTF-8 plain text in a format that works natively with revision control systems.

Supporting pictures and illustrations should be possible.

You need to be able to create both press-ready PDFs and epubs directly from the input files without having to reformat the text with something like Scribus.

Don't have styling information inside the input files. Those should be defined elsewhere, for example when generating the epub, all styling should come from a CSS file that the end user writes by hand. The input text should be as "syntax-only" as possible.

Writing HTML or XML style tags is right out.

Specifying formatting inline

Both LaTeX and Markdown specify their style information inline. This seems like a reasonable approach that people are used to. In fact I have seen Word documents written by professional proofreaders that do not use Word's styling at all but instead type Markdown-style formatting tokens inside Word documents.

The main difference between LaTeX and Markdown is that the former is verbose whereas the latter is, well, not as simple as you'd expect. The most common emphasis style is italic: the LaTeX way is to write \emph{italic} whereas Markdown requires it to be written as _italic_. This is one of the main annoyances of the LaTeX format: you need to keep typing that emph (or create a macro for it) and it takes up a lot of space in your text. Having a shorthand for common operations, as Markdown does, seems like a usability win. Typing \longcommand for functionality that is used only rarely is ok.

These formatting tokens have their own set of problems. Here's something you can try on any web site that supports Markdown formatting (I used Github). Write the same word twice: once with styling tokens in the middle of the word and once with the entire word emphasized, for example mono_ton_ic and _monotonic_.

Then click on the preview tab.

Whoops. One of the two has unexpected formatting. Some web editors even get this wrong: if you use a graphical preview widget and emphasize the middle of a word using ctrl-i, the editor shows it as emphasized, but if you click on the text tab and then return to the preview tab, it shows the underscore characters rather than italic text.

This might lead you to think that underscores within words (i.e. sequences of characters separated by whitespace) are ignored. So let's test that.

This is a simple test with three different cases. Let's look at the outcome.

Our hypothesis turned out to be incorrect. There are special cases where underscores within words are considered styling information and others where they are treated as characters. It is left as an exercise to the reader to 1) determine when that is and 2) what the rendering difference is between the two lower lines in the image (there is one).

Fair enough. Surely the same applies for other formatting as well.

This turns into:

Nope, behavioural difference again! But what if you use double underscores?

Okay, looking good. Surely that will work:

Nope. So what can we learn from this? Basically that in-band signaling is error prone and you should typically avoid it, because it will come back and bite you in the ass. Since the file format is UTF-8 we could sacrifice some characters outside basic ASCII for this use, but then you run into the problem of needing to type them with unusual keyboard combinations (or configuring your editor to emit them on ctrl-i or ctrl-b).
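For a custom format, one way to reduce this class of surprise is to make the rule trivially simple: styling characters only count at word boundaries. A sketch using this format's own /italic/ syntax (the HTML-ish output is just for illustration, not what the real renderer emits):

```python
import re

def render_italics(text: str) -> str:
    """Treat /slashes/ as italics only when the opening slash starts
    a whitespace-delimited chunk and the closing slash ends one;
    slashes inside words stay literal."""
    pattern = r"(?<!\S)/([^/\s][^/]*?)/(?!\S)"
    return re.sub(pattern, r"<i>\1</i>", text)

print(render_italics("some /italic text/ and a file/path here"))
# some <i>italic text</i> and a file/path here
```

The rule fits in one sentence and has no special cases, which is exactly what the Markdown examples above lack.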

Small caps

Small caps letters are often used in high quality typography. In LaTeX you get them with the \textsc command. Markdown does not support small caps at all. There are several discussion threads that talk about adding support for it to Markdown. To save you time, here is a condensed version of pretty much all of them:

"Can we add small caps to Markdown?"

"No, you don't need them."

"Yes I do."

"No you don't."

And so on. Small caps might be important enough to warrant their own formatting character as discussed in the previous section, and the implementation would have the same issues.

The dialogue clash

There are many different ways of laying out dialogue. Quotation marks are the most common, but starting a paragraph with a dash is also used (in Finnish at least; this might be culture dependent). Like so:

– Use the Force Luke, said Obi-Wan.

Thus it would seem useful to format all paragraphs that start with a dash character as dialogue. In this example the actual formatting used an en-dash. If you want to go the Markdown way this is problematic, because Markdown specifies that lines starting with dashes turn into bulleted lists:

  • Use the Force Luke, said Obi-Wan.

These are both useful things and you'd probably want to support both, even though the latter is not very common in story-oriented books. Which one should claim the starting dash? I don't have a good answer.
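To make the clash concrete, here is what two nearly identical lines do under typical Markdown rules (the exact set of list-marker characters may vary by implementation):

```markdown
– Use the Force Luke, said Obi-Wan.   <- en dash: plain paragraph, kept as-is
- Use the Force Luke, said Obi-Wan.   <- ASCII hyphen: becomes a bullet list item
```

You could reserve the en dash for dialogue and the hyphen for lists, but then the distinction hinges on two characters most people cannot tell apart.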

Saturday, August 13, 2022

Making decisions without all the information is tricky: a case study

In a recent blog post, Michael Catanzaro wrote about choosing proper configurations for your build, especially the buildtype attribute. As noted in the text, Meson's build type setup is not the greatest in the world, so I figured I'd write about why that is, what a better design would look like, and why we don't use that design (and probably won't for the foreseeable future).

The concept of build types was copied almost directly from CMake. The main thing they do is set compiler flags like -g and -O2. Quite early in Meson's development I planned on adding top-level options for debug info and optimization, but the actual implementation came much later. I copied the build types and flags almost directly, except for RelWithDebInfo and Release. Having these two as separate build types did not make sense to me, because you always need debug info in releases: without it you can't debug crash dumps coming from users. Thus I renamed them to debugoptimized and release.

So far so good, except there was one major piece of information I was missing. The word "debug" has two different meanings. On most platforms it means "debug info", but on Windows (or, specifically, with the MSVC toolchain) "debug" means a special build type that uses the "debug runtime", which has additional runtime checks that are useful during development. More info can be found e.g. here. This made the word "debug" doubly problematic. Not only do people on Windows expect it to refer to the debug runtime, but some (though not all) people on Linux think that "debugoptimized" means the build should only be used during development. Originally that was not the case: it was supposed to mean "build a binary with the default optimizations and debug info". What I originally wanted was for distros to build packages with buildtype set to debugoptimized, as opposed to living in the roaring 90s, passing a random collection of flags via CFLAGS and hoping for the best.
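For reference, the current build types are essentially presets over the two independent toggles (values as documented in the Meson manual at the time of writing; check your version):

```
buildtype          debug    optimization
plain              false    plain
debug              true     0
debugoptimized     true     2
release            false    3
minsize            true     s
```

So `meson setup build --buildtype=debugoptimized` should behave the same as `meson setup build -Ddebug=true -Doptimization=2`.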

How should it have been done?

With the benefit of hindsight a better design is fairly easy to see. Given that Meson already has toggles for all the individual bits, buildtype should describe the "intent", that is, what the end result will be used for. Its possible values should include the following:

  • development
  • releaseperf (maximize performance)
  • releasesize (minimize size)
It might also contain the following:

  • distro (when building distro packages)
  • fuzzing
Note that the word "debug" does not appear. This is intentional: all the words are chosen to be unambiguous, and if any turned out not to be, it would need to be changed. The value of this option would be orthogonal to the other flags. For example you might want a build that minimizes size but still uses -O2, because it sometimes produces smaller code than -Os or -Oz. Or suppose you have two implementations of some algorithm: one with maximal performance and another that yields less code. With this setup you could select between the two based on what the end result will be used for, rather than trying to guess it from optimization flags. (Some of this you can already do, but due to the issues listed in Michael's blog it is not as simple as it could be.)
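A sketch of what that selection could look like in a meson.build file. Note that the releasesize value is hypothetical (it does not exist in current Meson and stands in for the proposed intent-based buildtype), while get_option('buildtype') is a real Meson function:

```meson
# Hypothetical: select sources based on the intent of the build.
# 'releasesize' is the proposed intent-based value, not a real buildtype.
if get_option('buildtype') == 'releasesize'
    algo_sources = files('algo_compact.c')    # smaller generated code
else
    algo_sources = files('algo_fast.c')       # maximal performance
endif

algo_lib = static_library('algo', algo_sources)
```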

Can we switch to this model?

This is very difficult due to backwards compatibility. There are a ton of existing projects out there that depend on the way things are currently set up and breaking them would lead to a ton of unhappy users. If Meson had only a few dozen users I would have done this change already rather than writing this blog post.