Tuesday, December 16, 2025

An Appeal from the United Federation of Dictators, Despots, Evil Emperors and Tyrants

Truly, we are living in a new Golden Age for all those who share our passion for subjugating all of the human race under a single iron fist.

And to think that a mere few decades ago we thought that our way of life was headed for the dung heap of humanity. Education, international cooperation and other such scourges of democracy and civilization seemed to have taken an inescapable stranglehold on our core values and, by extension, our future. But then, our savior appeared from nowhere in the form of His Holiness Steve Jobs. The vision and tireless unpaid overwork of his minions gave birth to the Squircle of Self-Subjugation, and the world has never been the same since. May the glory of your achievements, oh Steve the Seer, shine forevermore throughout the four rounded corners of our planet!

Now, to be sure, many among our ranks were very sceptical of the product at first glance. Giving people the power to access all the information in the world wherever they may be seemed like the final blow to our cause. Yet, it became our greatest triumph. It did not take long to see that people who had formerly acted on injustices they perceived in the world switched to clicking the thumbs up button on a Facebook post advocating some change and then promptly forgetting about it. Even this would have been master-level brainwashing, but nowadays the unwashed masses do not feel the need to do even that. Instead they watch an unending stream of five-second nonsense videos that leave them incapable of thinking rationally. This is just like ye good olde Roman times of panem et circenses, except that the bread does not need to be given out. People will buy it themselves at outrageous prices and then claim that giving out free bread to the starving supports terrorism. These spontaneous acts of hatred are what make a tyrant's heart flutter with joy.

But that was only the beginning. Just as Steve gave us legs so we could run, Sam Altman gave us wings to take flight. His unsurpassed vision cannot be praised highly enough. While the rest of us were trying to destroy democracy using the tried and tested tools of our trade: war, forgery, drug trafficking, genocide, abolishing free speech (up to and including parody and satire), he went further than any of us could even dream of. He sought to destroy reality itself. Thus far he has been succeeding on every front and we salute him, for reality is highly problematic for us in the despot business. Reality affects people in weird ways. It warps them. It makes them want to do things they think are correct and to fix things they deem unjust instead of obeying our orders without questioning them. Because of this, reality must go.

And indeed it has. An ever-growing number of people are now incapable of making any decisions on their own. Instead they ask a machine, a higher authority, what to do and follow the given advice to the letter, even if it conflicts with the reality they see with their own eyes. The thing we had already deemed impossible is now routine. The knock-on effects of this generative AI technology should not be dismissed either. There are currently tens of millions of people working as our allies across all layers of western society to bring it crashing down. Rather than doing work, they just use generative AI to pretend to work. This gives them two unbeatable advantages. A) They don't have to do anything, but instead can spend more time getting their egos stroked in their social media bubbles. B) They keep getting paid as if they were still working. True, these kinds of people have always existed, but, with a single swift stroke of Sam's silk-clad sophisticatorix, the do-not-give-a-shitters are now the majority. Those who care will either collapse under the load or quit out of frustration. The collapse of democratic institutions is guaranteed. The only downside is the lack of a sense of accomplishment: it was all too easy.

Unfortunately institutions can be rebuilt. Fortunately that has already been prevented. The use of AI to cheat has spread through the entire schooling system like wildfire. Teaching the next generation to avoid doing any work, taking responsibility or questioning authority is the most important thing we, the destroyers of free thinking, individuality and people's sense of self-worth, can do. For the first time since the invention of basic education, this issue is now in good hands. There are even people who seriously, and loudly, hold the opinion that school should be fundamentally altered. Instead of propagating knowledge or thinking, it should only train children to use ChatGPT. To this we say an unequivocal yes! The future shall have no revolutions, only blind obedience.

Alas, all is not well in this, our new utopia. In recent times many of us have noticed a marked decrease in our servants' work ethic. Whether it be incorrectly cooked eggs at breakfast or poorly cleaned offices, the lack of quality and finesse is palpable. What is even the point of erecting a massive statue of yourself on the capital's main square if the gilding is flaking off it during the inauguration ceremony? Or of building the world's biggest eight-lane highway bridge only to have it collapse before a single car has crossed it, because nobody actually cared to do the load-bearing computations properly? The only way to get people to do anything nowadays is to threaten them with a firing squad. This toxic work environment brings about severe psychological stress. Contrary to common belief, tyrants are human too. Having to threaten ten different people with execution before lunch is exhausting and may even cause mental health issues for the caring Great Leader. This is, of course, as unacceptable as it is inhumane.

Thus we come to the core of this declaration. While we feel that social media, AI, cell phones and other technological tools of totalitarianism are useful and mandatory for the modern dictator, their use has spread too far. They destroy too many of the things we hold dear. There are things too important to be left to the whims of plebs who neither care nor understand what they are supposed to be doing. Thus we will be specifying a list of occupations and tasks where the use of these tools shall be prohibited and we encourage all totalitarian countries to do the same. As dictatorships are not, by definition, under free market pressure to maximize profit regardless of consequences, we have the luxury of being able to invest sensibly. That is how we win. Of course we would prefer not to have to resort to these measures, but given the current geopolitical situation, it is necessary.

Finally, so as not to end on a downer, we would like to extend our thanks to our business partners. As you know, bending all of humankind to the ground for an iron boot stomp on their face forever is not something you can do on your own. It is a team effort. We extend our most heartfelt gratitude to all our allies. Thank you Tim Cook, Mark Zuckerberg, Peter Thiel, Sundar Pichai, Jeff Bezos, Satya Nadella and all the rest. You are our Most Valuable Players. Without you, this would not have been possible. None of you were asked to do this. You stood up voluntarily and chose to take control of the ignorant masses, as any good despot should. Generations upon generations of children shall be marched to pay tribute to your portraits every morning of every month of every year. Which probably is, if we may be so bold as to speculate, the thing you wanted all along.

Yours sincerely in enslavement,

  • Idi Amin
  • Nicolae Ceaușescu
  • Kim Il-Sung and family
  • Augusto Pinochet
  • Pol Pot
  • Ranavalona I
  • Josef Stalin
  • Mao Zedong

Monday, November 24, 2025

3D models in PDF documents

PDF can do a lot of things. One of them is embedding 3D models in the file and displaying them. The user can orient them freely in 3D space and even choose how they should be rendered (wireframe, solid, etc.). The main use case for this is engineering applications.

Supporting 3D annotations is, as expected, unexpectedly difficult because:

  1. No open source PDF viewer seems to support 3D models.
  2. Even though the format specification is available, no open source software seems to support generating files in this format (by which I mean Blender does not do it by default). [1]
But, again, given sufficient effort and a willingness to submit your data to not-at-all-sketchy-looking 3D model conversion web sites, you can get 3D annotations to work. Almost.

As you can probably tell, the picture above is not a screenshot. I had to take it with a cell phone camera, because while Acrobat Reader can open the file and display the result, it hard crashes before you can open the Windows screenshot tool. 

[1] Update: apparently KiCad nightly can export U3D files that can be used in PDFs.

Thursday, November 13, 2025

Creating valid PDF/A-4 with CapyPDF

PDF/A is a specific version of PDF designed for long-term archival of electronic data. The idea is that PDF/A files are both self-contained and fully specified, so they can be opened in the future without any loss of fidelity.

Implementing PDF/A export is complicated by the fact that the specification is an ISO standard, which is not freely available. Fortunately, there are PDF/A validators that will tell you if (and sometimes how) your generated PDF/A is invalid. So, given sufficient patience, you can keep throwing PDF files at the validator, fixing the issues it reports and repeating this loop over and over until validation passes. Like this:
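For reference, a single round of that loop with the open source veraPDF validator looks roughly like this (the choice of validator, the flags and the file name here are illustrative; the exact invocation depends on your veraPDF version):

verapdf --flavour 4 capypdf_archival_test.pdf

If the report lists violations, fix the corresponding code in the generator and run the command again.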

This will be available in the next release of CapyPDF.

Tuesday, October 21, 2025

CapyPDF 1.8.0 released

I have just released CapyPDF 1.8. It's mostly minor fixes and tweaks but there are two notable things. The first one is that CapyPDF now supports variable axis fonts. The other one is that CapyPDF will now produce PDF version 2.0 files instead of 1.7 by default. This might seem like a big leap but really isn't. PDF 2.0 is pretty much the same as 1.7, just with documentation updates and deprecating (but not removing) a bunch of things. People using PDF have a tendency to be quite conservative in their versions, but PDF 2.0 has been out since 2017 with most of it being PDF 1.7 from 2008.

It is still possible to create files conforming to older PDF specs. If you specify, say, PDF/X3, CapyPDF will output PDF 1.3, as that spec requires that version and no other, even though, for example, Adobe's PDF tools accept PDF/X3 files whose version is later than 1.3.

The PDF specification is currently undergoing major changes and future versions are expected to have backwards incompatible features such as HDR imaging. But 2.0 does not have those yet.

Things CapyPDF supports

CapyPDF has implemented a fair chunk of the various PDF specs:

  • All paint and text operations
  • Color management
  • Optional content groups
  • PDF/X and PDF/A support
  • Tagged PDF (i.e. document structure and semantic information)
  • TTF, OTF, TTC and CFF fonts
  • Forms (preliminary)
  • Annotations
  • File attachments
  • Outlines
  • Page naming
In theory this should be enough to support things like XRechnung and documents with full accessibility information as per PDF/UA. These have not actually been tested, as I don't have personal experience with German electronic invoicing or document accessibility.

Wednesday, October 15, 2025

Building Android apps with native code using Meson

Building code for Android with Meson has long been possible, but a bit hacky and not particularly well documented. Recently some new features have landed in Meson main, which make the experience quite a bit nicer. To demonstrate, I have updated the Platypus sample project to build and run on Android. The project itself aims to demonstrate how you'd build a GUI application with shared native code on multiple platforms using native widget toolkits on each of them. Currently it supports GTK, Win32, Cocoa, WASM and Android. In addition to building the code it also generates native packages and installers.

It would be nice if you could build full Android applications with just a toolchain directly from the command line. As you start looking into how Android builds work, you realize that this is not really the way to go if you want to preserve your sanity. Google has tied app building very tightly to Android Studio. Thus the simple way is to build the native code with Meson, the Java/Kotlin code with Android Studio and then merge the two together.

The Platypus repo has a script called build_android.py, which does exactly this. The steps needed to get a working build are the following:

  1. Use Meson's env2mfile to introspect the current Android Studio installation and create cross files for all discovered Android toolchains.
  2. Set up a build directory for the toolchain version/ABI/CPU combination given, defaulting to the newest toolchain and arm64-v8a.
  3. Compile the code.
  4. Install the generated shared library in the source tree under <app source dir>/jniLibs/<cpu>.
  5. Android Studio will then automatically install the built libs when deploying the project.
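In terms of plain commands, the script boils down to roughly the following (the cross file and library names are illustrative, not the exact ones build_android.py uses):

meson setup build-android --cross-file android-arm64-v8a.ini

meson compile -C build-android

cp build-android/libplatypus.so <app source dir>/jniLibs/arm64-v8a/

Here android-arm64-v8a.ini stands for the cross file produced by env2mfile in step 1.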

Here is a picture of the end result. The same application is running both in an emulator (x86_64) and a physical device (arm64-v8a).

The main downside is that you have to run the native build step by hand. It should be possible to make this a custom build step in Gradle but I've never actually written Gradle code so I don't know how to do it.

Sunday, September 28, 2025

In C++ modules globally unique module names seem to be unavoidable, so let's use that fact for good instead of complexshittification

Writing out C++ module files and importing them is awfully complicated. The main cause of this complexity is that the C++ standard cannot impose requirements like "do not engage in Vogon-level stupidity, as that is not supported". As a result implementations have to support anything and everything under the sun. For module integration there are multiple different approaches, ranging from custom on-the-fly generated JSON files (which neither Ninja nor Make can read, so you need to spawn an extra process per file just to do the data conversion, but I digress) to custom on-the-fly spawned socket server daemons that do something. It's not really clear to me what.

Instead of diving into that hole, let's approach the problem from first principles, starting from the opposite side.

The common setup

A single project consists of a single source tree. It contains a single executable E and a bunch of libraries, say L1 to L99. Some of those are internal to the project and some are external dependencies. For simplicity we assume that they are all embedded as source within the parent project. All libraries are static and are all linked into the executable E.

With a non-module setup each library can have its own header/source pair with file names like utils.hpp and utils.cpp. All of those can be built and linked in the same executable and, assuming their symbol names won't clash, work just fine. This is not only supported, but in fact quite common.
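For example (all file and symbol names invented for illustration), nothing stops two of the libraries from shipping files like these:

// liba/utils.hpp
namespace liba { int helper(); }

// libb/utils.hpp
namespace libb { int helper(); }

Each library compiles its own utils.cpp into its own static library, and since the actual symbols differ, the identical file names never matter at link time.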

What people actually want going forward

The dream, then, is to convert everything to modules and have things work just as they used to.

If all the libraries were internal, it would be possible to enforce that the different utils libraries get different module names. If they are external, you clearly can't. The name is whatever upstream chooses it to be. There are now two modules called utils in the build and it is the responsibility of someone (typically the build system, because no-one else seems to want to touch this) to ensure that the two module files are exposed to the correct compilation commands in the correct order.

This is complex and difficult, but once you get it done, things should just work again. Right?

That is what I thought too, but that is actually not the case. This very common setup does not work, and cannot be made to work. You don't have to take my word for it; here is a quote from the GCC bug tracker:

This is already IFNDR, and can cause standard ODR-like issues as the name of the module is used as the discriminator for module-linkage entities and the module initialiser function.  Of course that only applies if both these modules get linked into the same executable;

IFNDR (ill-formed, no diagnostic required) is a technical term for "if this happens to you, sucks to be you". The code is broken and the compiler is allowed to do whatever it wants with it.

What does it mean in practice?

According to my interpretation of this comment (which, granted, might be incorrect as I am not a compiler implementer), if you have an executable and you link into it any code that has multiple modules with the same name, the end result is broken. It does not matter how the same module names get in, the end result is broken. No matter how much you personally do not like this and think that it should not happen, it will happen and the end result is broken.

At a higher level this means that this property forms a namespace. Not a C++ namespace, but a sort of virtual namespace that contains all "generally available" code, which in practice means all open source library code. As that public code can be combined in arbitrary ways, it means that if you want things to work, module names must be globally unique in that set of code (and also in every final executable). Any duplicates will break things in ways that can only be fixed by renaming all but one of the clashing modules.
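As a concrete sketch (file, module and namespace names invented for illustration), imagine two unrelated libraries that both ship a module interface unit like this:

// liba/utils.cppm
export module utils;
export namespace liba { int helper(); }

// libb/utils.cppm
export module utils;   // same module name as in library A
export namespace libb { int helper(); }

Even though the exported namespaces never collide, linking object code built from both of these into the same executable is IFNDR, because the module name utils is what gets used as the discriminator for module-linkage entities and the module initialiser function.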

Globally unique module names are thus not a "recommendation", a "nice to have" or a "best practice". They are a technical requirement that comes directly from the compilers and the definition of the standard.

The silver lining

If we accept this requirement and build things on top of it, things suddenly get a lot simpler. The build setup for modules reduces to the following for projects that build all of their own modules:

  • At the top of the build dir is a single directory for modules (GCC already does this, its directory is called gcm.cache)
  • All generated module files are written to that directory; as they all have unique names, they cannot clash
  • All module imports are done from that directory
  • Module mappers and all related complexity can be dropped to the floor and ignored
Importing modules from the system might take some more work (maybe copy Fortran and have a -J flag for module search paths). However at the time of writing GCC and Clang module files are not stable and do not work between different compiler versions or even when compiler flags differ between export and import. Thus prebuilt libraries can not be imported as modules from the system until that is fixed. AFAIK there is no timeline for when that will be implemented.

So now you have two choices:

  1. Accept reality and implement a system that is simple, reliable and working.
  2. Reject reality and implement a system that is complicated, unreliable and broken.
[Edit, fixed quote misattribution.]

Saturday, September 6, 2025

Trying out import std

Since C++ compilers are starting to support import std, I ran a few experiments to see what the status of that is. GCC 15 on the latest Ubuntu was used for all of the following.

The goal

One of the main goals of a working module implementation is to be able to support the following workflow:

  • Suppose we have an executable E
  • It uses a library L
  • L and E are made by different people and have different Git repositories and all that
  • We want to take the unaltered source code of L, put it inside E and build the whole thing (in Meson parlance this is known as a subproject)
  • Build files do not need to be edited, i.e. the source of L is immutable
  • Make the build as fast as reasonably possible

The simple start

We'll start with a helloworld example.
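A minimal standalone.cpp along these lines works for this (the exact text printed does not matter):

// standalone.cpp
import std;

int main() {
    std::println("Hello, import std!");
}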

This requires two compiler invocations.

g++-15 -std=c++26 -c -fmodules -fmodule-only -fsearch-include-path bits/std.cc

g++-15 -std=c++26 -fmodules standalone.cpp -o standalone

The first invocation compiles the std module and the second one uses it. There is already some wonkiness here. For example the documentation for -fmodule-only says that it only produces the module output, not an object file. However it also tries to link the result into an executable so you have to give it the -c argument to tell it to only create the object file, which the other flag then tells it not to create.

Building the std module takes 3 seconds and the program itself takes 0.65 seconds. Compiling without modules takes about a second, but only 0.2 seconds if you use iostream instead of println.

The module file itself goes to a directory called gcm.cache in the current working dir:
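gcm.cache/std.gcm

(That is, the compiled module interface is named after the module itself with a .gcm suffix; the exact name and layout may vary between GCC versions.)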

All in all this is fairly painless so far.

So ... ship it?

Not so fast. Let's see what happens if you build the module with a different standards version than the consuming executable.
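For example, reusing the invocations from above but compiling the module as C++23 and the executable as C++26 (any mismatched pair of -std values illustrates the point):

g++-15 -std=c++23 -c -fmodules -fmodule-only -fsearch-include-path bits/std.cc

g++-15 -std=c++26 -fmodules standalone.cpp -o standalone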

It detects the mismatch and errors out. Which is good, but also raises questions. For example, what happens if you build the module without extra defines but the consuming app with -DNDEBUG? In my testing it worked, but is it just a case of getting lucky with the UB slot machine? I don't know. What should happen? I don't know that either. Unfortunately there is an even bigger issue lurking about.

Clash of File Name Clans

If you are compiling with Ninja (and you should), all compiler invocations are made from the same directory (the build tree root). GCC also does not seem to provide a compiler flag to change the location of the gcm.cache directory (or at least it does not seem to be in the docs). Thus if you have two targets that both use import std, their compiled modules get the same output file name. They would clobber each other, so Ninja will refuse to build them (Make probably ignores this, so the files end up clobbering each other, which, if you are very lucky, only causes a build error).

Assuming that you can detect this and deduplicate building the std module, the end result still has a major limitation. You can only ever have one standard library module across all of your build targets. Personally I would be all for forcing this over the entire build tree, but sadly it is a limitation that can't really be imposed on existing projects. I know this from experience. People are doing weird things out there and they want to keep on weirding on. Sometimes even for valid technical reasons.

Even if this issue were fixed, it would not really help. As you can probably tell, this clashing will happen for all modules. So if your build ever has two modules called utils, no matter where they are or who wrote them, they will both try to write gcm.cache/utils.gcm and either fail to build, fail on import or invoke UB.

Having the build system work around this by changing the working directory to implicitly make the cache directory go elsewhere (and repoint all paths at the same time) is not an option. All process invocations must be doable from the top level directory. This is the hill I will die on if I must!

Instead what is needed is something like the target private directory proposal I made ages ago. With that you'd end up with command line arguments roughly like this:

g++-15 <other args> --target-private-dir=path/to/foo.priv --project-private-dir=toplevel.priv

The build system would guarantee that all compilations for a single target (library, exe, etc) have the same target private directory and all compilations in the build tree get the same top level private directory. This allows the compiler to do some build optimizations behind the scenes. For example if it needs to build a std module, it could copy it in the top level private directory and other targets could copy it from there instead of building it from scratch (assuming it is compatible and all that).