Wednesday, May 13, 2020

The need for some sort of a general trademark/logo license

As usual I am not a lawyer and this is not legal advice.

The problem of software licenses is fairly well understood and there are many established alternatives to choose from based on your needs. This is not the case for licenses governing assets such as images and logos, and especially trademarks. Many organisations, such as Gnome and the Linux Foundation, have their own trademark policy pages, but they seem to be tailored to those specific organizations. There does not seem to be any kind of "General project trademark and logo license", for lack of a better term, that people could apply to their projects.

An example case: Meson's logo

The Meson build system's name is a registered trademark. In addition it has a logo which is not. The things we would want to do with it include:
  • Allow people to use the logo when referring to the project in an editorial fashion (this may already be a legal right regardless, in some jurisdictions at least, but IANAL and all that)
  • Allow people to use the logo in other applications that integrate with Meson, in particular IDEs should be able to use it in their GUIs
  • People should be allowed to change the image format to suit their needs, logos are typically provided as SVGs, but for icon use one might want to use PNG instead
  • People should not be allowed to use the logos and trademarks in a way that would imply they are endorsing any particular product or service
  • People should not be allowed to create and sell third party merchandise (shirts etc.) using the logo
  • Achieve all this while maintaining compliance with DFSG, various corporate legal requirements, established best practices for trademark protection and all that.
Getting all of these at the same time is really hard. As an example, the Creative Commons licenses can be split in two based on whether they permit commercial use. All those that do permit it fail because they (seem to) permit the creation and sale of third party merchandise. Those that prohibit commercial use are problematic because they prevent companies from shipping a commercial IDE product that uses the logo to identify Meson integration (which is something we do want to support, that is what a logo is for after all). This could also be seen as discriminating against certain fields of endeavour, which is contrary to things like the GPL's freedom zero and DFSG guideline #6.

Due to this the current approach we have is that logo usage requires individual permission from me personally. This is an awful solution, but since I am just a random dude on the Internet with a lawyer budget of exactly zero, it's about the only thing I can do. What would be great is if the entities that do have the necessary resources and expertise would create such a license and publish it freely, so FOSS projects could apply it just as easily as picking a license for their code.

Monday, May 11, 2020

Enforcing locking with C++ nonmovable types

Let's say you have a struct with some variable protected by a mutex like this:

struct UnsafeData {
  int x;
  std::mutex m; // supposed to protect x
};

You should only be able to change x when the mutex is being held. A typical solution is to make x private and then create a method like this:

void UnsafeData::set_x(int newx) {
  // WHOOPS, forgot to lock mutex here.
  x = newx;
}

It is a common mistake that when code is changed, someone, somewhere forgets to add a lock guard. The problem is even bigger if the variable is a full object or a handle that you would like to "pass out" to the caller so they can use it outside the body of the struct. This caller also needs to release the lock when it's done.

This brings up an interesting question: can we implement a scheme that only permits safe accesses to the variable, that users can not circumvent [0], that has zero performance penalty compared to writing optimal lock/unlock function calls by hand, and that uses only standard C++?

Initial approaches

The first idea would be to do something like:

int& get_x(std::lock_guard<std::mutex> &lock);

This does not work because the lifetimes of the lock and the int reference are not enforced to be the same. It is entirely possible to drop the lock but keep the reference and then use x without the lock by accident.
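
To make the failure mode concrete, here is a hypothetical misuse sketch (assuming get_x is a member of the UnsafeData struct above and its mutex is named m, as in the earlier snippet):

void oops(UnsafeData &data) {
    int *stale = nullptr;
    {
        std::lock_guard<std::mutex> lock(data.m);
        stale = &data.get_x(lock); // fine while the lock is held...
    }
    *stale = 42; // ...but the lock is gone here, so this write is unsynchronized
}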

A second approach would be something like:

struct PointerAndLock {
  int *x;
  std::lock_guard<std::mutex> lock;
};

PointerAndLock get_x();

This is better, but still does not work. std::lock_guard objects can't be copied or moved, so for this to work the lock object would have to be stored on the heap, meaning a call to new. You could pass the lock in as an out-param, but those are icky. That would also be problematic in that the caller creates the object uninitialised, meaning that x points to garbage values (or nullptr). Murphy's law states that sooner or later one of those gets used incorrectly. We'd want to make these cases impossible by construction.

The implementation

It turns out that this has not been possible to do until C++17 added guaranteed copy elision. It means that it is possible to return objects from functions without either a copy or a move. It's as if they were automatically created in the scope of the calling function. If you are interested in how that works, googling for "return slot" should get you the information you need. With this the actual implementation is not particularly complex. First we have the data struct:

struct Proxy; // forward declaration, the definition follows below

struct Data {
    friend struct Proxy;
    Proxy get_x();

private:
    int x;
    mutable std::mutex m;
};

This struct only holds the data. It does not manipulate it in any way. Every data member is private, so only the struct itself and its Proxy friend can poke at them directly. All accesses go via the Proxy struct, whose implementation is this:

struct Proxy {
    int &x;

    explicit Proxy(Data &input) : x(input.x), l(input.m) {}

    Proxy(const Proxy &) = delete;
    Proxy(Proxy &&) = delete;
    Proxy& operator=(const Proxy&) = delete;
    Proxy& operator=(Proxy &&) = delete;

private:
    std::lock_guard<std::mutex> l;
};

This struct is not copyable or movable. Once created the only things you can do with it are to access x and to destroy the entire object. Thanks to guaranteed copy elision, you can return it from a function, which is exactly what we need.
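
For readers who have not run into guaranteed copy elision before, here is a minimal standalone sketch (not part of the design above, just an illustration) of what it permits since C++17:

struct Unmovable {
    Unmovable() = default;
    Unmovable(const Unmovable &) = delete;
    Unmovable(Unmovable &&) = delete;
};

Unmovable make() {
    return Unmovable{}; // no copy, no move: constructed directly in the caller's storage
}

int main() {
    auto u = make(); // fine in C++17, a compile error before it
    (void)u;
}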

The creating function is simply:

Proxy Data::get_x() {
    return Proxy(*this);
}

Using the result feels nice and natural:

void set_x(Data &d) {
    // d.x = 3 does not compile
    auto p = d.get_x();
    p.x = 3;
}
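
As a small additional sketch (not from the original example), the proxy can also be used as a temporary, in which case the mutex is held only for the duration of the full expression, and attempts to smuggle the proxy out via a move fail to compile:

void also_works(Data &d) {
    d.get_x().x = 7;                  // lock taken and released within this statement
    // auto p = std::move(d.get_x()); // error: Proxy's move constructor is deleted
}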

This has all the requirements we need. Callers can only access data entities when they are holding the mutex [1]. They do not, and indeed can not, release the mutex accidentally because it is marked private. The lifetime of the variable is tied to the lifetime of the lock; they both vanish at the exact same time. It is not possible to create half initialised or stale Proxy objects, they are always valid. Even better, the compiler produces assembly that is identical to the manual version, as can be seen via this handy godbolt link.

[0] Unless they manually reinterpret cast objects to char pointers and poke their internals by hand. There is no way to prevent this.

[1] Unless they manually create a pointer to the underlying int and stash it somewhere. There is no way to prevent this.

Monday, May 4, 2020

Let's talk meta

My previous blog post was about old tech causing problems in getting new developers on board. In it I had the following statement:
As a first order approximation, nobody under the age of 35 knows how to code in Perl, let alone would be willing to sacrifice their free time doing it.
When I wrote this, I spent a lot of time thinking whether I should add a footnote or extra sentence saying, roughly, that I'm not claiming that there are no people under 35 who know Perl, but that it is a skill that has gotten quite rare compared to ye olden times. The reason for adding extra text is that I feared that someone would inevitably come in and derail the discussion with some variation of "I'm under 35 and I know Perl, so the entire post is wrong".

In the end I chose not to put the clarification in the post. After all it was written slightly tongue-in-cheek, and even specifically says that this is not The Truth (TM), but just an approximation. The post was published. It got linked on a discussion forum. One of the very first comments was this:


This is what makes blogging on the Internet such a frustrating experience. Every single sentence you write has to be scrutinised from all angles and then padded and guarded so its meaning can not be sneakily undermined in this way. This is tiring, as it is difficult to get a good writing flow going. It may also make the text less readable and enjoyable. It makes blogging less fun and thus people less likely to want to do it.

An alternative to this is to not read any comments. This works, but then you are flying blind. You can't tell which writing is good and which is not, and you certainly can't improve. The Internet has ruined everything.

Meta-notes

Contrary to the claim made above, the Internet has not, in fact, ruined everything. The statement is hyperbole, stemming from the author's feelings of frustration. In reality the Internet has improved the quality of life of most people on the earth by a tremendous amount and should be considered as one of the greatest inventions of mankind.

"Ye olden times" was not written as "þe olden times" because in the thorny battle between orthographic accuracy and readability the latter won.

The phrase "flying blind" refers neither to actual flying nor to actual blindness. It is merely a figure of speech for any behaviour that is done in isolation without external feedback. You should never operate any vehicle under any sort of vision impairment unless you have been specifically trained and authorized to do so by the appropriate authorities.

Meta-meta-notes

The notes above were not written because the author thought that readers would take the original statements literally. Instead they are there to illustrate what would happen if the defensive approach to writing, as laid out in the post, were taken to absurd extremes. It exists purely for the purposes of comedy. As does this chapter.

Saturday, May 2, 2020

You have to kill your perlings

Preface

This blog post deals only with the social and "human" aspects of various technologies. It is not about the technical merits of any programming language or other such tech. If you intend to write a scathing Reddit comment along the lines of "this guy is an idiot, Perl is a great language for X because you can do Y, Z and W", please don't. That is not the point of this post. Perl was chosen as the canonical example mostly due to its prevalence, the same points apply for things like CORBA, TCL, needing to write XML files by hand, ridiculously long compilation times and so on.

What is the issue at hand?

In the 90s and early 2000s a lot of code was written. As was fashionable at the time, a lot of it was done using Perl. As open source projects are chronically underfunded, a lot of that code is still running. In fact a lot of the core infrastructure of Debian, the BSDs and other such foundational projects is written in Perl. When told about this, many people give the "project manager" reply and say that since the code exists, works and is doing what it should, everything is fine. But it's really not, and to find out why, let's look at the following graph.

Graph of number of people capable and willing to work on Perl. The values peak at 2000 and plummet to zero by 2020.

As we can see the pool of people willing to work on Perl projects is shrinking fast. This is a major problem for open source, since a healthy project requires a steady influx of new contributors, developers and volunteers. As a first order approximation, nobody under the age of 35 knows how to code in Perl, let alone would be willing to sacrifice their free time doing it.

One could go into long debates and discussions about why this is, how millennials are killing open source and how everyone should just "man up" and start writing sigils in their variable names. It would be pointless, though. The reasons people don't want to do Perl are irrelevant, the only thing that matters is that the use of Perl is actively driving away potential project contributors. That is the zeitgeist. The only thing you can do is to adapt to it. That means migrating your project from Perl to something else.

But isn't that risky and a lot of work?

Yes. A lot of Perl code is foundational. In many cases the people who wrote it have left and no-one has touched it in years. Changing it is risky. No matter how careful you are, there will be bugs. Nasty bugs. Hard to trace bugs. Bugs that work together with other bugs to cancel each other out. It will be a lot of hard work, but that is the price you have to pay to keep your project vibrant.

An alternative is to do nothing. If your project never needs to change, then this might be a reasonable course of action. However if something happens and major changes are needed (and one thing we have learned this year is that unexpected things actually do happen) then you might end up as the FOSS equivalent of the New Jersey mayor trying to find people to code COBOL for free.

Sunday, April 19, 2020

Do humans or compilers produce faster code?

Modern optimizing compilers are truly amazing. They have tons and tons of tricks to make even crappy code run incredibly fast. Faster than most people could write by hand. This has led some people to claim that program optimization is something that can be left to compilers, as they seem to be a lot better at it. This usually sparks a reply from people on the other end of the spectrum who say that they can write faster code by hand than any compiler, which makes compilers mostly worthless when performance actually matters.

In a way both of these viewpoints are correct. In another way they are both wrong. To see how, let's split this issue into two parts.

A human being can write faster code than a compiler for any given program

This one is fairly easy to prove (semi)formally. Suppose you have a program P written in some programming language L that runs faster than any hand written version. A human being can look at the assembly output of that program and write an equivalent source version in straight C. Usually when doing this you find some specific optimization that you can add to make the hand written version faster.

Even if the compiler's output were proved optimal (such as in the case of superoptimization), it can still be matched by copying the output into your own program as inline assembly. Thus we have proven that for any program humans will always be faster.
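
As a hedged sketch of that argument (GCC/Clang extended inline asm on x86-64; the instruction sequence here is an arbitrary stand-in, not anything a superoptimizer produced), a known-good instruction sequence can be wrapped so that the "hand-written" routine is byte-for-byte the compiler's output:

// Wrap the known-good instruction sequence as inline assembly so the
// hand-written routine matches the optimal output exactly.
static inline long add_one(long x) {
    long result;
    __asm__("lea 1(%1), %0" : "=r"(result) : "r"(x));
    return result;
}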

A human being can not write faster code than a compiler for every program

Let's take something like Firefox. We know from the previous chapter that one could eschew complex compiler optimizations and rewrite it in straight C or equivalent and end up with better performance. The downside is that you would die of old age before the task would be finished.

Human beings have a limited shelf life. There are only so many times they can press a key on the keyboard before they expire. Rewriting Firefox in straight C so that it runs faster than the current version with all optimizations enabled is just too big a task.

Even if by some magic you could do this, during the rewrite the requirements on the browser would change. A lot. The end result would be useless until you add all the new functionality that was added since then. This would lead to eternal chasing of the tail lights of the existing project.

And even if you could do that, optimizing compilers keep getting better, so you'd need to go through your entire code base regularly and add the same optimizations by hand to keep up. All of these things could be done in theory, but they are completely impossible in practice.

The entire question is poorly posed

Asking whether compilers and humans write faster code is kind of like asking which one is "bluer", the sea or the sky. Sure you could spend years debating the issue on Twitter without getting anywhere, but it's not particularly interesting. A more productive way is to instead ask the question "given the requirements, skills and resources I have available, should I hand-optimize this particular routine or leave it to the compiler".

If you do this you rediscover the basic engineering adage: you get the best bang for the buck by relying on the compiler by default and doing things by hand only for bottlenecks that are discovered by profiling the running application.

PS. Unoptimized C is slow, too

Some people think that when they write C it is "very close" to the underlying assembly and thus does not benefit much from compiler optimizations. This has not been true for years (possibly decades). The performance difference between no optimization and -O2 can be massive, especially for hot inner loops.
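
As a rough illustration (exact numbers depend on the machine, compiler and flags), a trivial hot loop like the one below typically runs several times faster when built with -O2 than with optimizations disabled:

// sum.cpp -- build with `g++ -O0 sum.cpp -o slow` and `g++ -O2 sum.cpp -o fast`,
// then time both binaries.
#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv) {
    // Take the iteration count from the command line so the compiler
    // can not fold the whole loop away at compile time.
    long long n = argc > 1 ? std::atoll(argv[1]) : 100000000LL;
    long long total = 0;
    for (long long i = 0; i < n; ++i) {
        total += i % 7; // cheap but non-trivial work in the inner loop
    }
    std::printf("%lld\n", total);
    return 0;
}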

When people say that they can write code that is faster than a compiler-optimized version of the same algorithm in some other language, that is not what they are actually saying, unless they are writing 100% pure ASM by hand [0]. Instead they are saying "I can take any algorithm implementation, write it with an alternative syntax and, when both of these are run through their respective optimizing compilers, end up with a program that runs faster".

[0] Which does happen sometimes, especially for SIMD code.

Saturday, April 11, 2020

Your statement is 100% correct but misses the entire point

Let's assume that there is a discussion going on on the Internet about programming languages. One of the design points that come up is a garbage collector. One participant mentions the advantages of garbage collection with something like this:
Garbage collectors are nice and save a lot of work. If your application does not have strict latency requirements, not having to care about memory management is liberating and can improve developer efficiency by a lot.
This is a fairly neutral statement that most people would agree with, even if they work on code that has strict real time requirements. Yet, inevitably, someone will present this counterpoint.
No! If you have dangling references memory is never freed and you have to fix that by doing manual memory management anyway. Garbage collectors do not magically fix all bugs.
If you read through the sentences carefully you'll notice that every asserted statement in it is true. That is what makes it so frustrating to argue against. Most people with engineering backgrounds are quite willing to admit they are wrong when presented with evidence that their statements are not correct. This does not cover everyone, of course, as some people are quite willing to violently disagree with any and all facts that are in conflict with their pre-held beliefs. We'll ignore those people for the purpose of this post.

While true, that single sentence ignores all of the larger context of the issue, which contains points like the following:

  • Out-of-memory errors caused by dangling references are rare (maybe 1 in 10 programs?) whereas regular memory bugs like use-after-free, double free and off-by-one errors are very common (100-1000 in every program).
  • Modern GCs have very good profilers; finding dangling references is a lot easier than debugging stack corruptions.
  • Being able to create things on a whim and just drop them on the floor makes programmers a lot more productive than forcing them to micromanage the complete life cycle of every single resource.
  • Even if you encounter a dangling reference issue, fixing it probably takes less time than would have gone to fixing memory corruption issues in a GCless version of the same app.
In brief, the actual sentence is true but misses the entire point of the comment it is replying to. This is sadly common in Internet debates. Let's see some examples.

Computer security

A statement like this:
Using HTTPS on all web traffic is good for security and anonymity.
 might be countered with something like this:
That provides no real security, if the NSA want your data they will break into your apartment and get it.
This statement is again absolutely true. On the other hand, if you are not the leader of a nation state and do not do regular business with international drug cartels, you are unlikely to be the target of a directed NSA offensive.

If you think that this is a stupid point that nobody would ever make, I agree with you completely. I have also seen it used in the real world. I wish I hadn't.

Bugs are caused by incompetents

High level programming languages are nice.
A programming language that guards against buffer overruns is great for security and ease of development.
But not for everyone.
You can achieve the exact same thing in C, you just have to be careful.
This is again true. If every single developer on a code base is 100% focused and 100% careful 100% of the time, then bug-free code is possible. Reality has shown time and time again that this does not happen; human beings are simply not capable of operating flawlessly for extended periods of time.

Yagni? What Yagni?

There's the simple.
Processing text files with Python is really nice and simple.
And not so simple.
Python is a complete joke, it will fail hard when you need to process ten million files a second on an embedded microcontroller using at most 2 k of RAM.
Yes. Yes it does. In that use case it would be the wrong choice. You are absolutely correct. Thank you for your insight, good sir, here is a shiny solid gold medal to commemorate your important contribution to this discussion.

What could be the cause of this?

The one thing that school trains you for is that being right is what matters. If you get the answers right in your test, then you get a good grade. Get them wrong and you don't. Maybe this frame of mind "sticks on" once you leave school, especially given that most people who post these kinds of comments seem to be from the "smarter" end of the spectrum (personal opinion, not based on any actual research). In the real world being right is not a merit by itself. In any debate being right is important, of course, but the much more important feature is being relevant. That requires understanding the wider context and possibly admitting that something that is the most important thing in the world to you personally, might be completely irrelevant for the issue at hand.

Being right is easy. Being relevant is extremely difficult.

Sunday, April 5, 2020

Meson manual sales status and price adjustment

The sales dashboard of the Meson manual currently looks like this.

It splits up quite nicely into three parts. The first one is the regular sales from the beginning of the year, which is on average less than one sale per day.

The second part (marked with a line) indicates when I was a guest on CppCast talking about Meson and the book. As an experiment I created a time limited discount coupon so that all listeners could buy it with €10 off. As you can tell from the graph it did have an immediate response, which again proves that marketing and visibility are the things that actually matter when trying to sell any product.

After that we have the "new normal", which means no sales at all. I don't know if this is caused by the coronavirus isolation or whether this is the natural end of life for the product (hopefully the former but you can never really tell in advance).

Price reduction

Thus, effective immediately, the price of the book has been reduced to €24.95. You can purchase it from the official site.

Thursday, March 26, 2020

It's not what programming languages do, it's what they shepherd you to

How many of you have listened to, read or taken part in a discussion about programming languages that goes like the following:

Person A: "Programming language X is bad, code written in it is unreadable and horrible."

Person B: "No it's not. You can write good code in X, you just have to be disciplined."

Person A: "It does not work, if you look at existing code it is all awful."

Person B: "No! Wrong! Those are just people doing it badly. You can write readable code just fine."

After this the discussion repeats from the beginning until either one gets fed up and just leaves.

I'm guessing more than 99% of you readers have seen this, often multiple times. The sad part is that even though this happens all the time, nobody learns anything and the discussion keeps starting anew. Let's see if we can do something about this. A good way to go about it is to try to come up with a name and a description for the underlying issue.
shepherding: An invisible property of a programming language and its ecosystem that drives people into solving problems in ways that are natural for the programming language itself rather than in ways that are considered "better" in some sense. These may include things like long term maintainability, readability and performance.
This is a bit abstract, so let's look at some examples.

Perl shepherds you into using regexps

Perl has several XML parsers available and they are presumably good at their jobs (I have never actually used one so I wouldn't know). Yet, in practice, many Perl scripts do XML (and HTML) manipulation with regexes, which is brittle and "wrong" for lack of a better term. This is a clear case of shepherding. Text manipulation in Perl is easy. Importing, calling and using an XML parser is not. And really all you need to do is to change that one string to a different string. It's tempting. It works. Surely it could not fail. Let's just do it and get on with other stuff. Boom, just like that you have been shepherded.

Note that there is nothing about Perl that forces you to do this. It provides all the tools needed to do the right thing. And yet people don't, because they are being shepherded (unconsciously) into doing the thing that is easy and fast in Perl.

Make shepherds you into embedding shell pipelines in Makefiles

Compiling code with Make is tolerable, but it fails quite badly when you need to generate source code, data files and the like. The sustainable solution would be to write a standalone program in a proper scripting language that has all the code logic needed and call that from Make with your inputs and outputs. This rarely happens. Instead people think "I know, I have an entire Unix userland available [1], I can just string together random text mangling tools in a pipeline, write it here and be done". This is how unmaintainability is born.

Nothing about Make forces people to behave like this. Make shepherds people into doing this. It is the easy, natural outcome when faced with the given problem.

Other examples

  • C shepherds you into manipulating data via pointers rather than value objects.
  • C++ shepherds you into providing dependencies as header-only libraries.
  • Java does not shepherd you into using classes and objects, it pretty much mandates them.
  • Turing complete configuration languages shepherd you into writing complex logic with them, even though they are usually not particularly good programming environments.
[1] Which you don't have on Windows. Not to mention that every Unix has slightly different command line arguments and semantics for basic commands meaning shell pipelines are not actually portable.

Wednesday, March 11, 2020

The character that swallowed a week

In the last few posts we have looked at compiling LibreOffice from scratch using Meson. Contrary to what one might expect it was not particularly difficult, just laborious. The codegen bits (yes, there are several) required some deciphering, but other than that it was fairly straightforward. Unfortunately just compiling source code is not sufficient, as usually one also wants to run the result. Further you'd want the new binaries to behave in the same way as the old ones. This is where things get interesting.

Trying to run the main LibreOffice application is not particularly useful because it will almost certainly not work. Fortunately LO provides a bunch of sample and test applications one can use. One of these is a demo app that starts, initialises the LO runtime and opens up a GUI window. Perfect. After a build (which takes an hour) and install the binary is ready to run and produces a … segfault. More annoyingly it produces a segfault without any useful information that would aid in debugging.

Time to take out strace. Several attempts later we discover that LO tries to be dynamic and smart. LO consists of >150 shared libraries. Rather than link all of them together, what seems to be happening is that the code tries to determine where the main executable is, then looks up shared libraries relative to that binary, dlopens them and then goes on its merry way. If this fails it crashes somewhere for some reason I chose not to dig into. This debugging brought up an interesting discovery about naming. One of the libraries is called SAL, so naturally you would expect the actual file to be called libsal.so. This is also what the Makefile defining the library calls it. But that is not its actual name. Instead it is libsallo.so. Somewhere within the bowels of the 150 000 lines of Make that make (ha) up the system, some (but not all) library names get an lo appended to them. There does not seem to be any logic to which ones, but fine, they can at least be fixed with manual work.

After that the program failed trying to open some data files. This was easily fixed by swiping all binary files from an existing working installation. Then it failed with a weird assert failure that seemed to indicate that the runtime object system had not been properly initialised. LO is built around a custom object model called UNO or Universal Network Objects. As you could tell by the name it comes from the 90s and has the following properties:
  • It provides an object model very close to the one in Java
  • Absolutely everything is done with inheritance, and all functionality is at least three namespaces deep
  • Objects are defined with a custom language that is compiled to C++ headers using a self built tool (I'm told this generates roughly a million lines of C++ headers in total)
  • All class types and objects are constructed at runtime
  • The runtime is written in C++, but the actual implementation is all void pointers and reinterpret casts

Break all the tools!

At this point the easy avenues had been explored and it was time to bring out gdb. While editing build definition files is something you can do with a simple editor, debugging really requires an IDE. The command line interface, even the curses one, is cumbersome compared to a full fledged GUI with mouse hovers for variables and so on. Fortunately this is a simple task: create a new Eclipse project, build (taking an hour again), install, run and debug. It worked really nicely. For about a day.

Eventually Eclipse's code indexer crashed while doing a full reindexing operation. When you restart Eclipse after such a crash it will restart the indexing operation and promptly crash again. The eventual solution was to manually delete all code that is not absolutely necessary for the test app, such as Writer, Calc and the rest. This made the working set small enough that it did not crash any more. And no, increasing the amount of memory allocated to Eclipse's JVM does not fix the issue. It crashes even if given 8 gigs of memory. With this done the sequence of steps leading to the crash could be debugged:

  1. The runtime tries to initialise itself, fails and throws its own custom exception type.
  2. The exception is caught, ignored and discarded, followed by a call to custom error handling code.
  3. The code retrieves the exception via other methods, and then tries to introspect the object for further information.
  4. Introspecting the error object for the reason why runtime initialisation failed requires the runtime to be initialised and working. As it is not, things crash.
Interestingly if you change the code at step 2 to just grab the exception and print its error message, it works correctly. Hooray for frameworks, I guess.

After some fixes the issue was gone but a new one took its place. Somehow the code got into a loop where function A called B, which called C and so on for about 20 functions until something called back to function A. This led to an eternal loop and stack exhaustion.

At this point we have two supposedly identical programs: one built with gbuild and one built with Meson. They should behave identically but don't. This gives us an approach to solve the problem: single step both programs in a debugger until their execution differs. When that happens we know where the bug is. In practice this is less simple. Because the code uses a ton of OO helper classes and templates, even a single function call can lead into many stack frames that you have to step through manually. These helpers go away when optimisations are enabled, but in this particular case they can't be used as they make debugging quite difficult. Even -Og changes the code too much to be useful. So no optimization it is and every time you change the optimization level to test it takes, you guessed it, an hour to build (or more if you try -O2).

Manually single stepping through code is the sort of tedious manual labor that humans are bad at. Fortunately gdb has a Python scripting interface. Thus one could write a script that makes each gdb connect to a common server, which orders them to single step, reports their current PC locations and halts when they differ. This worked perfectly until the programs called into libc. For some reason the same calls into libc (specifically getenv) took a different number of steps to finish, so the programs desynched almost immediately. Fixing that seemed to take too much work so that idea was scrapped.

Manually single stepping through code is difficult because if you accidentally go too far, you have to start over again. Fortunately we now have rr, which allows you to step backwards in code execution as well. Recording a trace of one of the programs worked. The other program worked as well. Running both of them at the same time failed miserably for reasons that remained unclear.

Debuglander ][: The Sloggening

Nevertheless at this point I had two debugging aides: what actually should happen as an rr trace and what actually was happening in a live debugger. Now it was just a matter of finding out where their execution differs. After about two days of debugger stepping I gave up, doing this sort of work by hand is just not something the human brain does very well (or at least mine doesn't). It was back to the old straw-grasping-at board.

Like most big frameworks, UNO had its own code that does special compiler magic. The Makefile lists several flags that should not be used to compile said code. I tried adding those in the Meson version. It did not help. There is also some assembly code that fiddles with registers. Maybe that was the problem? Nope again. Maybe one of the libraries is missing some preprocessor define that causes bad compilations? No. Everything seemed to be in order and doing the exact same thing as the original build system did.

LO does provide unit tests for some functionality. I had not built them with Meson yet, so I figured I'd fix some of those just to get something done. Eventually I converted one test that just exercises an Any object and it crashed in the exact same way. Now I had a reproducer for the bug with 10x to 100x less code. Once more unto the debuggers, dear friends, once more!

Eventually the desync point was found. One execution went to a header file line 16 and the other went to the same header, but to line 61. It was getting fairly late so simply noticing the difference between 16 and 61 took a while. Now we had the smoking gun, but there was one more twist to this story: the header file did not have a line 61, it had only about 30 lines of text.

Nevertheless the debugger reported line 61. It even allowed one to inspect variables and do all the other things you would not expect to be able to do when your process execution is on a nonexistent line. What was happening? Was the debug info in the ELF files corrupted in some way? And then it finally hit me.

LibreOffice generates two different sets of headers for the same IDL files: regular and comprehensive (there is also a third bootstrap one, but let's ignore that). These headers have the same names and the same supposed behaviour but different implementations. You can #include either in your code and it will compile and sort-of-work. I don't know why the original developers had decided to tempt the ODR violation gods in this way and nobody on the LO IRC channel seemed to know either, but finally the core issue had been found. When reverse engineering the Makefiles I had found the code generation phase that creates the regular headers rather than the comprehensive ones.

The fix consisted of changing one of the code generator's command line arguments from -l to -C, recompiling (which took an hour again) and running the binary. Finally, one week's worth of debugging work was rewarded with this:


A blank window has never looked sweeter.

Final commercial announcement

If you enjoyed this post and would like to know more about the Meson build system, consider buying my ebook.

Monday, March 2, 2020

Meson manual sales numbers and a profitability estimate

The Meson Manual has been available for purchase for about two months now. This is a sufficient amount of time to be able to estimate total sales amounts and the like. As one of the goals of the project was to see if this could be a reasonable way to compensate FOSS maintainers for their work, let's go through the numbers in detail.

Income

At the time of writing, 45 copies of the book have been sold amounting to around 1350€ of income. The first month's sales were about 700€ and the second one's about 500€. This is the common sales pattern where sales start at some level and then gradually decrease as time passes. Depending on how one calculates the drop-off, this would indicate total sales over a year of 2000-2500€.

Expenses

The ecommerce site charges ~10€ per month, which is 120€ per year. Credit card processing fees should land somewhere in the 200-300€ area depending on sales. Accounting services come to about 500€. Then there are many small things that probably sum up to 100-200€ in a year. After these immediate expenses roughly 800-1300€ remains. Unfortunately this is where things get uglier.

The book was launched in my presentation at linux.conf.au. Having some sort of a launch is a necessity, because just putting things on the Internet does not work. This cost about 2000€ total in travel and lodging expenses. This single expense takes out all remaining income and then some. One could have gone to a conference nearer by, but as Finland is far away from everywhere, the expenses are easily 500€ even for continental Europe. This also assumes that one would have gotten a talk accepted there, which is not at all guaranteed. In fact the vast majority of Meson talks I have submitted have been rejected. Thus depending on how you look at it, we are now either 700€ in the red or 300-800€ in the black in the best possible case.

Writing a book takes time. A lot of time. The way I got mine was to make a deal with my employer to only work four days a week for several months. I spent that one day a week (specifically Wednesday) writing the book and working out all the requisite bureaucracy. In my day job I'm a consultant and my salary is directly determined by the number of hours billed from customers. Doing the math on this says that writing the manual cost me ~9000€ compared to just being at work.

With this the total compensation of the manual comes out to a loss between 8000€ and 9500€.

So was it worth it?

It depends? As a personal endeavor, writing, publishing and selling a full book is very satisfying and rewarding (even though writing it was at times incredibly tedious). But financially? No way. The break-even point seems to be about 10× the current sales. If 100× sales were possible, it might be a sufficient amount for more people to take the risk and try to make a living this way. With these sales figures it's just not worth it.

Bonus chapter: visibility and marketing

The biggest problem of any project like this is marketing: how to get your project visible. This is either extremely hard or downright impossible. As an example, let's do some Fermi estimations on reachability on Twitter. Suppose you post about your new project on Twitter, how many of your followers would then buy the product? Ten percent is probably too much, and one tenth of a percent is too small, so let's say one percent. Thus to sell 400 copies you'd need to have 40 000 direct Twitter followers. Retweets do not count as their conversion rate is even lower.

Your only real chance is to get visibility on some news media, but that is also difficult. They get inundated by people and projects that want to get their products to be seen. Thus news sites publish news almost exclusively on large established projects as they are the things that interest their readers the most. Note that this should not be seen as a slag against these sites, it's just the nature of the beast. A news site posting mostly about crowdfunding campaigns of small unknown projects would not be a very tempting site and would go out of business quickly.

Advertising might work, but the downsides include a) it costs more money and b) the target audience for a programming manual is the set of people who are the most likely to use ad blockers or otherwise ignore online ads (I have never bought anything based on an ad I have seen online).

Sunday, March 1, 2020

Unity build test with Meson & LibreOffice

In a previous blog post we managed to build a notable chunk of LibreOffice with Meson. This has since been updated so you can build all top level apps (Writer, Calc, Impress, Draw). The resulting binaries do not actually run, so the conversion may seem pointless. That is not the case, though, because once you have this build setup you can start doing interesting experiments on a large real world C++ code base. One of these is unity builds.

A unity build is a surprisingly simple technique to speed up builds, especially for C++. The idea is that instead of compiling source files individually, you build them in larger batches by creating files like these and compiling them instead:

#include<source1.cpp>
#include<source2.cpp>
#include<source3.cpp>
// etc

One of the main downsides of this technique is writing and maintaining the unity files by hand. Fortunately Meson has builtin support for creating unity files for build targets. In the next release you can even specify how many source files you want to have in a single unity source. Enabling unity builds for a single target is simple and consists of adding the following keyword argument to a target's definition:

override_options: ['unity=on']

In this experiment we used LibreOffice's Writer target, which is a single shared library consisting of almost 700 source files. The unity block size was Meson's default of 4.

The results

Compiling source code as a unity build for the first time usually leads to build failures. This is exactly what happened here as well. There are many reasons for this. The most common are name clashes between static and anonymous-namespace functions, and between classes with common names that are used without namespace qualification. LO turned out to have at least three different classes called Size. All these issues need to be fixed either by renaming or by adding namespace qualifiers. This is boring manual work, but on the other hand you may find duplicated static functions across your code base.
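
As a hypothetical minimal example of the most common failure (the file names here are made up for illustration), two source files that each compile fine on their own clash once they are concatenated into a unity source:

// file_a.cpp
static int helper() { return 1; } // internal to this translation unit...
int entry_a() { return helper(); }

// file_b.cpp
static int helper() { return 2; } // ...until a unity build merges the units
int entry_b() { return helper(); }

// generated unity source (conceptually)
#include "file_a.cpp"
#include "file_b.cpp" // error: redefinition of 'int helper()'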

Once all the issues have been fixed we can do actual measurements. The test machine was an 8 thread i7-3770 with an SSD drive and the buildmode was set to debug. The full build times are these:

Regular   10m 
Unity      4m 32s

The unity build takes less than half the time of the regular one, which is a fairly typical result. The difference would have been even bigger if we had used a bigger unity block size. Incremental builds were tested by deleting one object file and rebuilding.

Regular 26s
Unity   22s

Typically incremental unity builds are slower than regular ones, but in this case the unity build is actually faster. This is probably because unity builds produce smaller output files that have less debug data, so the linker has less work to do. Increasing the unity block size would make the incremental build time slower.

Converting the code to compile as a unity build took on the order of three hours. Ironically most of that was spent waiting for compilations to finish. Since this saves around five minutes per build, the time investment is recovered after the development team has done roughly 35 full builds. The code changes have been done with as little effort as possible, so polishing this to production quality would probably take 2-3 times as long.

Do try this at home!

The code is available in the unitytest branch of this repo, it has only been tested with Meson trunk on Linux. There are two targets that use unity builds: vcl and sw. You can toggle unity builds as well as the unity block size per target with the override_options keyword argument.

Sunday, February 23, 2020

Open source does not have a reward mechanism for tedious

A common talking point about everyday broken things is "I can't believe we put a man on the moon, but we still don't have <something seemingly straightforward>". The phrase is thankfully going out of style (maybe people are starting to forget that we did, in fact, put a man on the moon) but the sentiment still holds true. There are many things in our every day life that should be a lot better but aren't. This is also the case for open source development.

Since about the year 2000, when the term "open source" came into common usage, tens of thousands of software projects have sprung up, seemingly from thin air. The projects range from simple scripts to incredibly large and complex pieces of software. Some are purely volunteer run, such as the Mame and Dolphin emulators, whereas others have corporate support, such as the WebKit browser engine. Yet many seemingly simpler things are broken. Connecting projectors to laptops is still a bit of a gamble, touchpads behave wonkily, almost any open source project's documentation is either thin or nonexistent, many dev tools break when faced with nonstandard setups, and so on.

This does not seem to make sense. Writing a full HTML rendering engine from scratch is a lot harder than, say, writing documentation for the classes of a widget toolkit. Like most real world problems this has many causes and effects, but in this post we are going to examine the fact that the word "hard" has two different meanings. Things can be hard either because they are difficult or because they are tedious.

Many software developers are creators and builders. They are drawn to problems of the first type. The fact that they are difficult is not a downside, it is a challenge to be overcome. It can even be a badge of merit to wave around in front of your fellow developers. These projects include things like writing your own operating system or 3D game engine, writing device drivers that saturate the fastest of transfer links, lock free atomic parallelism, distributed file systems that store exabytes of data, as well as embedded firmware that has less than 1 kilobyte of RAM. Working on these kinds of problems is rewarding on its own, even if the actual product never finishes or fails horribly when eventually launched. They are, in a single word, sexy.

Most problems are not like that. They are instead the programming equivalent of ditch digging: a lot of hard work that is not very exciting on its own but still needs to be done. It is difficult to get volunteers to work on these kinds of problems, and this is where the problem gets amplified in open source. Corporations have a very strong way to motivate people to work on tedious problems and it is called a paycheck. Volunteer driven open source development does not have a way to incentivise people in the same way. This is a shame, because the chances of success for any given software project (and startup) are directly proportional to the amount of tedious work the people working on it are willing to do.

Sunday, February 16, 2020

Building even more of LibreOffice with Meson, now with graphics

I have converted some more of LO's build system to Meson as an experiment. This is the current status:


Note that this contains only the main deliverables, i.e. the shared libraries and executables. Unit tests and the like are not converted apart from a few sample tests.

It was mentioned in an earlier blog post that platform abstraction layers are the trickiest ones to build. This turns out to be the case here also. LO has at least three such frameworks (depending on how you count them). SAL is the very basic layer, UNO is a component model used to, for example, expose functionality to Java. Finally VCL is the GUI toolkit abstraction layer. Now that we have the GUI toolkit and its GTK plugin built we can build a VCL sample application and launch it. It looks like this:


This is again fairly typical of existing projects that have custom build systems, where they require a specific layout of files in the build directory or some environment settings that specify where to run things. This does not seem to be a case of incorrect build configuration, since trying to run executables directly from the build dir of a checkout built with the existing Make-based system fails in much the same way. This is a problem because the project builds code generator executables and uses them to create sources during build, and running them directly does not work. This is the sort of issue which is hard to debug as an outsider, and could really use help from people who know what the code is actually doing.

The curious can find the full code in this Github repo.

More info on Meson

I have written and self-published a user manual on Meson. It can be purchased via this web site.

Sunday, February 9, 2020

Trying to build a slightly larger slice of Libreoffice with Meson

One of the few comments I got on my previous blog post was that building only the sal library is uninteresting because it is so small. So let's go deeper and build the base platform of LO, which is called Uno. Based on docs and slides from conference presentations, it is roughly the marked area in LO's full dependency graph.


The gloves come off

A large fraction of the code is generated from IDL files. So one might imagine that if one has a file like com/sun/foo/bar.idl then one would convert that into a header file com/sun/foo/bar.hpp using a program called idlc. One would be wrong. That is not what happens at all.

When trying to understand what humongous Make builds are doing, the best approach is not to even try. Do not look at the Makefiles, or the macro code they include. Instead run the build, capture all executed commands with Make's verbose mode and reverse engineer what the system is doing based on them. Implementing the same functionality is a lot easier this way, because you know exactly what command lines you must end up with. In this particular case we find that the idl files are compiled to an intermediate binary blob using a program called unoidl-write. After that the header files (yes, multiple per idl file) are created from this blob with a different program called cppumaker.

This has the unfortunate side effect that when you run the header generator you can not know in advance what files the program will generate and what sort of a directory hierarchy they are put in without a side channel. This makes it very difficult to integrate with build systems, because they need to know outputs exactly in order to make build steps reliable. Java has the exact same problem. Because of the way it handles inner classes, the output set of a Java compilation is unknowable beforehand. If you ever find yourself designing a code generator tool, please make sure you don't have this Sunism in your program, as it makes integration needlessly complicated and unreliable.

Even if you manage to recreate the command lines with a different system, things may still fail in interesting ways. In LO's case the internal SAL file system library has a policy that callers are not allowed to create temporary files in a directory with a relative pathname. This limitation is reported with the helpful error message of "could not open  for writing". (In case blog aggregators break the formatting, there are two spaces between the words "open" and "for".) It is also possible that relative paths appear to succeed but produce error messages such as "can not read file in legacy format" later in the process.

Other fun stuff

If one were to look at the command lines that eventually get invoked, they would look like this:


This may seem overwhelming, so let's add some markers.


An interesting question is how many processes does this spawn per source file? I don't know the correct answer, but a reasonable guess would be either 4 or 7 (5 or 8 if you count the outer shell).

Status

The code can be obtained via Github. It's Linux only and some bits are fairly ugly. It does generate the UNO code and build all the libs, but unit tests, Java and other pieces are missing and dependency tracking does not work for code generators. The build definitions consist of ~550 lines of Meson. There are a few Python helper programs in addition to these. They have a lot of duplicated code, but for a prototype such as this one it's tolerable.

I tried to go a bit further and build basegfx, but then I got stuck. The generated code has #ifdef toggles that define whether some variables are defined as ints or enums. Other libraries fail to build either when the define is set or when it is unset. For some reason basegfx has multiple files that fail both when the define is set and when it is unset (with different error messages, though). Either the code generation step goes awry or there are even more magic defines that have to be set in order for things to work.

Wednesday, February 5, 2020

Building (a very small subset of) LibreOffice with Meson

At FOSDEM I got into a discussion with a LibreOffice dev about whether it would be possible to switch LO's build system to Meson. It would be a lot of manual work for sure, but would there be any fundamental problems? Since a simple test can eliminate a ton of guesswork, I chose to take a look.

Like most cross platform programs, LO has its own platform abstraction layer called Sal. According to experience, these kinds of libraries usually have the nastiest build configurations requiring a ton of configure checks and the like. The most prominent example is GLib, whose configure steps are awe-inspiring.

Sal turned out to be fairly simple to port to Meson. It did not require all that much in platform setup, probably because the C++ stdlib provides a lot more out of the box than libc. After a few hours I could compile all of Sal and run some unit tests. The results of the experiment can be found in this Github repo. The filenames and layouts are probably not the same as in the "real" LO build, but for a simple experiment like this they'll do.

So what did we learn?

  • Basic compilation seems to be straightforward, but there were some perennial favourites including source files that you must not compile standalone, magic compilation defines, using declarations hidden at the bottom of public headers behind #ifdefs and so on.
  • There may be further platform funkiness in the GUI toolkit configuration step.
  • A lot of code seems to be generated from IDL files, which might require some work.
  • Meson's Java support probably needs some work to build all the JARs.
  • Meson should scale at least to building LO itself, building all dependency projects at the same time is a different matter.
The last of these is not really an issue on Linux, since you get almost all deps from the system. On Windows and Mac you can achieve the same by using something like Conan for dependency management. However Meson might be scalable enough to build LO and all deps in a single go, and if it's not then making it do that would be an interesting optimization challenge.

How would one express LO's sheer size with a single number?

It has more than 150 000 lines of Makefiles.

Monday, February 3, 2020

Creating your own physical or PDF manual

This post is part 2 of N describing the creation process of the Meson manual, which you can purchase via this web site. Part 1 can be read here.

There are three main ways of producing a technical book using only free software.

  1. LibreOffice + Scribus
  2. LibreOffice only
  3. LaTeX
The first of these options mimics the way traditional book publishers work. This is highly convenient when the final formatting is done by a dedicated person rather than the author. It is not so nice for solo operators, because migrating changes from LO to Scribus is tedious. You must either do the layout only as the very final step or at some point switch to keeping the master data in Scribus and doing all edits directly there. These approaches are either cumbersome or require strong resolve not to keep changing the text after it has gone to the DTP program.

The second approach is easier, but the downside is that LibreOffice (or MS Office or similar programs) does not produce documents that are as aesthetically pleasing as those from a DTP program. I don't know the specifics, but at least the line- and page-breaking algorithms seem to produce worse results, probably because they need to be fast enough to be used in real time. There are also various reliability and layout issues. For example, it is difficult to get figures to remain where you want them as the text is edited, and I have experienced cases where entries in the bibliography disappear for no reason. Also, if you are not very pedantic about using styles, changing the global document appearance is problematic.

Enter LaTeX!

The Meson manual is written in LaTeX. It is a bit quirky, but it provides many major features that other systems do not (or at least not easily). The main point is that in LaTeX you only write the structure of the document and the system takes care of all formatting, much like the split between HTML and CSS. The default look produced by LaTeX is magical in the way it automatically lends an air of gravitas to any piece of text.
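
To illustrate the split, here is a minimal, made-up snippet (not taken from the actual manual) in which the source declares only structure; fonts, margins and page layout all come from the document class:

\documentclass{book}

\begin{document}

\chapter{Getting started}
This chapter explains how the tooling is set up.

\section{Installing the tools}
Install the compiler with your package manager and verify
that it runs. The details are covered in the next section.

\end{document}

Switching the document class or loading a style package changes the look of the entire book without touching a single chapter file.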

Separating content from formatting is nice in that you get to work at the plain markup level when editing but can easily see the typeset version of your text at any time. Since LaTeX is plain text, you can use revision control tools to manage your book sources just like source code. Some things that are difficult in GUI apps work effortlessly in LaTeX. The way it handles floating figures alone saves so much time and effort that it could be worth the price of admission by itself.

One of the big problems in writing a book about programming is keeping your code samples both up to date and working. LaTeX provides a simple and elegant solution to this problem. Since it is just a macro processing system, it provides a way to include text from standalone files on the file system. In the Meson manual all code samples are written as standalone projects, and there is a script that builds and runs all of them. In other words, with LaTeX you can write unit tests for your book.
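
As a sketch of how this can be wired up (the file names below are invented for illustration, not the manual's actual layout), the listings package can pull a code sample straight from a standalone project that a separate script also configures, builds and runs:

% In the preamble:
\usepackage{listings}

% In the chapter text. The included file lives in a real,
% buildable sample project, so the text that gets typeset
% is exactly the code that the test script compiles and runs.
\lstinputlisting{samples/trivialproject/meson.build}

If a sample stops building, the test script fails long before a stale listing ever reaches a reader.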

The main downside of LaTeX is that its output looks exactly like LaTeX, and if you want your book to look different, it takes a fair bit of work. The syntax takes some getting used to, and if your keyboard layout requires multiple keys to type a backslash, typing it can get tiring. There are tools that can convert e.g. Markdown to LaTeX to make the writing process easier, but usually you need to do some fine tuning on the output or use custom LaTeX packages to get the result you want.

How many copies of the manual have been sold thus far?

31.

If you are a corporation and want to support the project by buying a bulk license for all your employees, send me an email.

Sunday, January 26, 2020

How the Meson manual sales pipeline is set up and how to set up your own

Setting up all the pieces to get the Meson manual sales page up and running was a fair bit of work. Since other people might be interested in setting up something similar for their projects, here are some random notes on the things I had to do. All of this comes with the usual disclaimer that this is not accounting or legal advice; speak to an actual professional before embarking on your own venture.

A company

The first thing you need is a company. IIRC, credit card processors either deal only with corporations or charge individuals a lot more in processing fees. The choice of corporation type depends on the country you live in.

A sales platform

This is the platform that provides the "web store", manages product downloads and so on. The Meson manual uses SendOwl, but there are other providers as well. In theory you could write this yourself (it is only a web shop, after all), but then you get the headache of ops, backups, storing user data in a GDPR-compliant manner and all that jazz. Using an existing store is fairly cheap and saves you a ton of work.

A credit card processor

This operator takes care of charging money from users' credit cards and delivering it to you. Your choices are limited to those supported by your sales platform. SendOwl supports PayPal and Stripe; Meson uses the latter because its fees were noticeably lower. Interestingly, Stripe requires proof of location in the form of a utility bill, which is a challenge in countries like Finland where this is a completely foreign concept.

Web site hosting

The requirements here depend on how fancy a web site you want to have. Meson's is a single static page with a link to the sales platform. 

Taxes

This is a very complicated topic, in fact even more complicated than that page implies. The crux is that a seller of digital goods may be responsible for gathering and paying VAT and/or sales tax to the countries they sell to, not just their own country. Most countries have a minimum threshold of sales (such as 100 000 dollars) before a foreign operator needs to register and gather tax. Some don't (meaning the threshold is zero).

The EU is quite simple if you are within the EU. You register for the special VAT MOSS program, gather the necessary (country-dependent) tax and then report and pay it to the tax authorities of your own country. The sales platform automatically calculates the correct tax amount and provides the report required by the tax office. For EU residents this is highly convenient. For operators outside the EU it is a bit more work, as you need to register, but only in one EU country of your choice; all bureaucracy is dealt with through this single point.

Countries with a sufficiently high threshold you don't have to do anything about. Countries with low (or zero) thresholds can be geoblocked in the sales platform.

This leaves the United States, which is special. Each state has its own way of dealing with sales tax, and you can't geoblock individual states.

GDPR et al compliance

If you use a third-party sales platform and credit card processor and never store any transaction data on your own servers, this is actually fairly simple. Stripe will even generate a compliance document for you automatically.

How many copies of the Meson manual have been sold through the web site by now?

22.

That is about half the number of people who participated in the failed Indiegogo campaign last year.

Tuesday, January 21, 2020

The Meson Manual is now available for purchase

Some of you might remember that last year I ran a crowdfunding campaign to create a full written user manual for Meson. That failed fairly spectacularly, mostly due to the difficulty of getting any sort of visibility for these kinds of projects (i.e. on the Internet, everything drowns).

Not taking the hint, I chose to write and publish it on my own anyway. It is now available on this web page for the price of 29.95€ plus tax that depends on the country of purchase. Some countries with unreasonable requirements for foreign online sellers, such as India, Russia and South Korea, have been geoblocked. Sorry about that. However, you can still buy the book if you are travelling outside the country in question, but in that case all tax responsibilities for importing it fall upon you.

What if you don't care about books?

I don't have a Patreon or any other crowdfunding thing ongoing because of the considerable legal uncertainties of running a donation-based service for the public good in Finland. Selling digital goods for money is fine, so this is a convenient way for people to support my work on Meson financially.

Will the book be made available under a free license?

No. We already have one set of free documentation on the project web site, and everyone is free to use and contribute to that documentation. This book contains no text from the existing documentation; it is all new and written from scratch.

Is it available as a hard copy?

No, the only available format is PDF. This is partly to save trees and partly because international shipping of physical items is both time consuming and expensive.

Getting review copies

If you are a journalist and wish to write a review of the book for a publication, send me an email and I'll provide you with a free review copy.

When was the book first made public?

It was announced at the very beginning of my LCA2020 talk. See it for yourself in the embedded video below.

Can you post about this on your favourite social media site / news aggregator / etc?

Yes, by all means. It is hard to get visibility otherwise, so I appreciate all the help I can get.

What was that site's URL again?