Wednesday, May 13, 2020

The need for some sort of a general trademark/logo license

As usual I am not a lawyer and this is not legal advice.

The problem of software licenses is fairly well understood and there are many established alternatives to choose from based on your needs. This is not the case for licenses governing assets such as images and logos, and especially trademarks. Many organisations, such as Gnome and the Linux Foundation, have their own trademark policy pages, but those seem to be tailored to the specific organisation in question. There does not seem to be a "general project trademark and logo license", for lack of a better term, that people could apply to their projects.

An example case: Meson's logo

The Meson build system's name is a registered trademark. In addition it has a logo which is not. The things we would want to do with it include:
  • Allow people to use the logo when referring to the project in an editorial fashion (this may already be a legal right in some jurisdictions regardless of the license, but IANAL and all that)
  • Allow people to use the logo in other applications that integrate with Meson; in particular, IDEs should be able to use it in their GUIs
  • People should be allowed to change the image format to suit their needs; logos are typically provided as SVGs, but for icon use one might want a PNG instead
  • People should not be allowed to use the logos and trademarks in a way that would imply they are endorsing any particular product or service
  • People should not be allowed to create and sell third party merchandising (shirts etc) using the logo
  • Achieve all this while maintaining compliance with DFSG, various corporate legal requirements, established best practices for trademark protection and all that.
Getting all of these at the same time is really hard. As an example, the Creative Commons licenses can be split in two groups based on whether they permit commercial use. All those that do permit it fail because they (seem to) permit the creation and sale of third party merchandise. Those that prohibit commercial use are problematic because they prevent companies from shipping a commercial IDE product that uses the logo to identify Meson integration (which is something we do want to support, that is what a logo is for after all). This could also be seen as discriminating against certain fields of endeavour, which is contrary to things like the GPL's freedom zero and DFSG guideline #6.

Because of this, the current approach is that logo usage requires individual permission from me personally. This is an awful solution, but since I am just a random dude on the Internet with a lawyer budget of exactly zero, it's about the only thing I can do. What would be great is if the entities that do have the necessary resources and expertise would create such a license and publish it freely, so that FOSS projects could adopt it as easily as picking a license for their code.

Monday, May 11, 2020

Enforcing locking with C++ nonmovable types

Let's say you have a struct with some variable protected by a mutex like this:

struct UnsafeData {
  int x;
  std::mutex m;
};

You should only be able to change x when the mutex is being held. A typical solution is to make x private and then create a method like this:

void UnsafeData::set_x(int newx) {
  // WHOOPS, forgot to lock mutex here.
  x = newx;
}

A common mistake is that when code is changed, someone, somewhere forgets to add a lock guard. The problem is even bigger if the variable is a full object or a handle that you would like to "pass out" to the caller so they can use it outside the body of the struct. This caller also needs to release the lock when it's done.
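For comparison, here is a minimal sketch of both the setter one intends to write and the kind of caller-managed locking that the "pass it out" scenario tends to produce. The member and function names (m, lock_and_get_x, unlock_x, use_x_elsewhere) are made up for illustration and are not part of the original example.

#include <mutex>

struct UnsafeData {
    void set_x(int newx) {
        std::lock_guard<std::mutex> guard(m); // the line that is so easy to forget
        x = newx;
    }

    // "Passing the value out": the caller gets a pointer and must remember to
    // call unlock_x() when done. Nothing in the type system enforces that.
    int *lock_and_get_x() {
        m.lock();
        return &x;
    }

    void unlock_x() {
        m.unlock();
    }

private:
    int x = 0;
    std::mutex m;
};

void use_x_elsewhere(UnsafeData &d) {
    int *px = d.lock_and_get_x();
    *px += 1;
    d.unlock_x(); // forget this, or return early before it, and the mutex stays locked
}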

This brings up an interesting question: can we implement a scheme which only permits safe accesses to the variables in a way that the users can not circumvent [0], which has zero performance penalty compared to writing optimal lock/unlock function calls by hand, and which uses only standard C++?

Initial approaches

The first idea would be to do something like:

int& get_x(std::lock_guard<std::mutex> &lock);

This does not work because the lifetimes of the lock and the int reference are not enforced to be the same. It is entirely possible to drop the lock but keep the reference and then use x without the lock by accident.
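A sketch of the accident this allows (assuming, purely for illustration, that get_x is a member of a Data-like struct and that its mutex m happens to be reachable by the caller):

void oops(UnsafeData &d) {
    int *px;
    {
        std::lock_guard<std::mutex> lock(d.m);
        px = &d.get_x(lock);
    } // lock released here...
    *px = 42; // ...but nothing stops us from writing through the stale pointer
}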

A second approach would be something like:

struct PointerAndLock {
  int *x;
  std::lock_guard<std::mutex> lock;
};

PointerAndLock get_x();

This is better, but does not work. Lock objects are special and can be neither copied nor moved, so for this to work the lock object would have to be stored on the heap, meaning a call to new. You could pass the result struct in as an out parameter, but those are icky. That would also be problematic in that the caller creates the object uninitialised, meaning that x points to garbage values (or nullptr). Murphy's law states that sooner or later one of those gets used incorrectly. We'd want to make these cases impossible by construction.

The implementation

It turns out that this was not possible to do until C++17 added guaranteed copy elision. It means that it is possible to return objects from functions without either a copy or a move; it is as if they were created directly in the scope of the calling function. If you are interested in how that works, searching for "return slot" should get you the information you need.
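A minimal sketch of just the mechanism (the type Pinned and the function make_pinned are illustrative names, not part of the design that follows):

// A type that can be neither copied nor moved.
struct Pinned {
    explicit Pinned(int v) : value(v) {}
    Pinned(const Pinned &) = delete;
    Pinned(Pinned &&) = delete;
    int value;
};

// Guaranteed copy elision (C++17): the returned object is constructed directly
// in the caller's storage, so no copy or move is ever performed.
Pinned make_pinned() {
    return Pinned(42);
}

void user() {
    auto p = make_pinned(); // fine: p is initialized directly from the prvalue
}

With this the actual implementation is not particularly complex. First we have the data struct: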

#include <mutex>

// Proxy is defined further down; Data only needs to know the name here.
struct Proxy;

struct Data {
    friend struct Proxy;
    Proxy get_x();

private:
    int x;
    mutable std::mutex m;
};

This struct only holds the data. It does not manipulate it in any way. Every data member is private, so only the struct itself and its Proxy friend can poke at them directly. All accesses go via the Proxy struct, whose implementation is this:

struct Proxy {
    int &x;

    explicit Proxy(Data &input) : x(input.x), l(input.m) {}

    // Neither copyable nor movable: once created, a Proxy can only be used in
    // place and then destroyed, which releases the lock.
    Proxy(const Proxy &) = delete;
    Proxy(Proxy &&) = delete;
    Proxy& operator=(const Proxy&) = delete;
    Proxy& operator=(Proxy &&) = delete;

private:
    std::lock_guard<std::mutex> l;
};

This struct is not copyable or movable. Once created the only things you can do with it are to access x and to destroy the entire object. Thanks to guaranteed copy elision, you can return it from a function, which is exactly what we need.

The creating function is simply:

Proxy Data::get_x() {
    return Proxy(*this);
}

Using the result feels nice and natural:

void set_x(Data &d) {
    // d.x = 3 does not compile
    auto p = d.get_x();
    p.x = 3;
}

This fulfils all the requirements we had. Callers can only access the data entities when they are holding the mutex [1]. They do not, and indeed can not, release the mutex accidentally, because it is marked private. The lifetime of the variable is tied to the lifetime of the lock; they both vanish at the exact same time. It is not possible to create half-initialised or stale Proxy objects, they are always valid. Even better, the compiler produces assembly that is identical to the manual version, as can be seen via this handy godbolt link.
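One more illustrative sketch (the function update is hypothetical): the lock is released exactly when the Proxy goes out of scope, so the extent of the protected access is visible right in the code.

void update(Data &d) {
    {
        auto p = d.get_x(); // mutex locked here
        p.x += 1;
    } // Proxy destroyed: mutex released exactly here

    // int &stale = d.get_x().x; // compiles, but the temporary Proxy and its
    //                           // lock die at the end of the full expression,
    //                           // so later writes through stale would again be
    //                           // unprotected (the escape hatch of footnote [1])
}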

[0] Unless they manually reinterpret cast objects to char pointers and poke their internals by hand. There is no way to prevent this.

[1] Unless they manually create a pointer to the underlying int and stash it somewhere. There is no way to prevent this.

Monday, May 4, 2020

Let's talk meta

My previous blog post was about old techs causing problems in getting new developers on board. In it I had the following statement:
As a first order approximation, nobody under the age of 35 knows how to code in Perl, let alone would be willing to sacrifice their free time doing it.
When I wrote this, I spent a lot of time thinking whether I should add a footnote or extra sentence saying, roughly, that I'm not claiming that there are no people under 35 who know Perl, but that it is a skill that has gotten quite rare compared to ye olden times. The reason for adding extra text is that I feared that someone would inevitably come in and derail the discussion with some variation of "I'm under 35 and I know Perl, so the entire post is wrong".

In the end I chose not to put the clarification in the post. After all it was written slightly tongue-in-cheek, and even specifically says that this is not The Truth (TM), but just an approximation. The post was published. It got linked on a discussion forum. One of the very first comments was this:


This is what makes blogging on the Internet such a frustrating experience. Every single sentence you write has to be scrutinised from all angles and then padded and guarded so its meaning can not be sneakily undermined in this way. This is tiring, as it is difficult to get a good writing flow going. It may also make the text less readable and enjoyable. It makes blogging less fun and thus people less likely to want to do it.

An alternative to this is to not read any comments. This works, but then you are flying blind. You can't tell which writing is good and which is not, and you certainly can't improve. The Internet has ruined everything.

Meta-notes

Contrary to the claim made above, the Internet has not, in fact, ruined everything. The statement is hyperbole, stemming from the author's feelings of frustration. In reality the Internet has improved the quality of life of most people on the earth by a tremendous amount and should be considered as one of the greatest inventions of mankind.

"Ye olden times" was not written as "├że olden times" because in the thorny battle between orthographic accuracy and readability the latter won.

The phrase "flying blind" refers neither to actual flying nor to actual blindness. It is merely a figure of speech for any behaviour that is done in isolation without external feedback. You should never operate any vehicle under any sort of vision impairment unless you have been specifically trained and authorized to do so by the appropriate authorities.

Meta-meta-notes

The notes above were not written because the author thought that readers would take the original statements literally. Instead they are there to illustrate what would happen if the defensive approach to writing, as laid out in the post, were taken to absurd extremes. It exists purely for the purposes of comedy. As does this chapter.

Saturday, May 2, 2020

You have to kill your perlings

Preface

This blog post deals only with the social and "human" aspects of various technologies. It is not about the technical merits of any programming language or other such tech. If you intend to write a scathing Reddit comment along the lines of "this guy is an idiot, Perl is a great language for X because you can do Y, Z and W", please don't. That is not the point of this post. Perl was chosen as the canonical example mostly due to its prevalence, the same points apply for things like CORBA, TCL, needing to write XML files by hand, ridiculously long compilation times and so on.

What is the issue at hand?

In the 90s and early 2000s a lot of code was written. As was fashionable at the time, a lot of it was done in Perl. As open source projects are chronically underfunded, a lot of that code is still running. In fact a lot of the core infrastructure of Debian, the BSDs and other such foundational projects is written in Perl. When told about this, many people give the "project manager" reply and say that since the code exists, works and does what it should, everything is fine. But it's really not, and to find out why, let's look at the following graph.

[Graph of the number of people capable of and willing to work on Perl. The values peak around 2000 and plummet towards zero by 2020.]

As we can see the pool of people willing to work on Perl projects is shrinking fast. This is a major problem for open source, since a healthy project requires a steady influx of new contributors, developers and volunteers. As a first order approximation, nobody under the age of 35 knows how to code in Perl, let alone would be willing to sacrifice their free time doing it.

One could go into long debates and discussions about why this is, how millennials are killing open source and how everyone should just "man up" and start writing sigils in their variable names. It would be pointless, though. The reasons people don't want to do Perl are irrelevant, the only thing that matters is that the use of Perl is actively driving away potential project contributors. That is the zeitgeist. The only thing you can do is to adapt to it. That means migrating your project from Perl to something else.

But isn't that risky and a lot of work?

Yes. A lot of Perl code is foundational. In many cases the people who wrote it have left and no-one has touched it in years. Changing it is risky. No matter how careful you are, there will be bugs. Nasty bugs. Hard to trace bugs. Bugs that work together with other bugs to cancel each other out. It will be a lot of hard work, but that is the price you have to pay to keep your project vibrant.

An alternative is to do nothing. If your project never needs to change, then this might be a reasonable course of action. However if something happens and major changes are needed (and one thing we have learned this year is that unexpected things actually do happen), then you might end up as the FOSS equivalent of the New Jersey governor trying to find people to code COBOL for free.

Sunday, April 19, 2020

Do humans or compilers produce faster code?

Modern optimizing compilers are truly amazing. They have tons and tons of tricks to make even crappy code run incredibly fast, faster than most people could write by hand. This has led some people to claim that program optimization is something that can be left to compilers, as they seem to be a lot better at it. This usually sparks a reply from people on the other end of the spectrum who say that they can write faster code by hand than any compiler, which makes compilers mostly worthless when performance actually matters.

In a way both of these viewpoints are correct. In another way they are both wrong. To see how, let's split this issue into two parts.

A human being can write faster code than a compiler for any given program

This one is fairly easy to prove (semi)formally. Suppose you have a program P, written in some programming language L, that runs faster than any hand written version. A human being can look at the compiler's assembly output for that program and write an equivalent source version in straight C. Usually when doing this you find some specific optimization that you can add to make the hand written version faster.

Even if the compiler's output were proved optimal (such as in the case of superoptimization), it can still be matched by copying the output into your own program as inline assembly. Thus we have proven that for any program humans will always be faster.
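As a hedged sketch of that last step (assuming GCC or Clang on x86, with made-up function names), "copying the output in as inline assembly" means something like this:

// What the compiler optimizes for us.
int add_compiled(int a, int b) {
    return a + b;
}

// The same operation with the instruction written out by hand as extended
// inline assembly: essentially the compiler's own output pasted back in.
int add_by_hand(int a, int b) {
    asm("addl %1, %0" : "+r"(a) : "r"(b));
    return a;
}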

A human being can not write faster code than a compiler for every program

Let's take something like Firefox. We know from the previous chapter that one could eschew complex compiler optimizations and rewrite it in straight C or equivalent and end up with better performance. The downside is that you would die of old age before the task would be finished.

Human beings have a limited shelf life. There are only so many times they can press a key on the keyboard before they expire. Rewriting Firefox in straight C so that it runs faster than the current version with all optimizations enabled is simply too big a task.

Even if by some magic you could do this, during the rewrite the requirements on the browser would change. A lot. The end result would be useless until you add all the new functionality that was added since then. This would lead to eternal chasing of the tail lights of the existing project.

And even if you could do that, optimizing compilers keep getting better, so you'd need to go through your entire code base regularly and add the same optimizations by hand to keep up. All of these things could be done in theory, but they are completely impossible in practice.

The entire question is poorly posed

Asking whether compilers or humans write faster code is kind of like asking which one is "bluer", the sea or the sky. Sure, you could spend years debating the issue on Twitter without getting anywhere, but it's not particularly interesting. A more productive way is to instead ask the question "given the requirements, skills and resources I have available, should I hand-optimize this particular routine or leave it to the compiler?"

If you do this you rediscover the basic engineering adage: you get the best bang for the buck by relying on the compiler by default and doing things by hand only for bottlenecks that are discovered by profiling the running application.

PS. Unoptimized C is slow, too

Some people think that when they write C it is "very close" to the underlying assembly and thus does not benefit much from compiler optimizations. This has not been true for years (possibly decades). The performance difference between no optimization and -O2 can be massive, especially for hot inner loops.
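As a hedged micro-example (the numbers are arbitrary), here is the kind of hot inner loop where the difference is easy to measure; build it twice, with g++ -O0 and g++ -O2, and compare the run times:

#include <cstdio>
#include <vector>

int main() {
    std::vector<int> data(10000000, 1);
    long total = 0;
    for (int round = 0; round < 100; ++round) {
        for (int v : data) {
            total += v; // at -O2 this loop is typically vectorized, at -O0 it is not
        }
    }
    std::printf("%ld\n", total);
    return 0;
}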

When people say that they can write code that is faster than a compiler-optimized version of the same algorithm in some other language, that is not what they are actually saying, unless they are writing 100% pure ASM by hand [0]. What they are actually saying is "I can take any algorithm implementation, write it with an alternative syntax and, when both of these are run through their respective optimizing compilers, end up with a program that runs faster".

[0] Which does happen sometimes, especially for SIMD code.

Saturday, April 11, 2020

Your statement is 100% correct but misses the entire point

Let's assume that there is a discussion going on on the Internet about programming languages. One of the design points that come up is a garbage collector. One participant mentions the advantages of garbage collection with something like this:
Garbage collectors are nice and save a lot of work. If your application does not have strict latency requirements, not having to care about memory management is liberating and can improve developer efficiency by a lot.
This is a fairly neutral statement that most people would agree with, even if they work on code that has strict real time requirements. Yet, inevitably, someone will present this counterpoint.
No! If you have dangling references memory is never freed and you have to fix that by doing manual memory management anyway. Garbage collectors do not magically fix all bugs.
If you read the reply carefully you'll notice that every asserted statement in it is true. That is what makes it so frustrating to argue against. Most people with engineering backgrounds are quite willing to admit they are wrong when presented with evidence that their statements are not correct. This does not cover everyone, of course, as some people are quite willing to violently disagree with any and all facts that are in conflict with their pre-held beliefs. We'll ignore those people for the purpose of this post.

While true, that single sentence ignores all of the larger context of the issue, which contains points like the following:

  • Out of memory errors caused by dangling references are rare (maybe 1 in 10 programs?) whereas regular memory bugs like use-after-free, double free and off by one errors are very common (100-1000 in every program).
  • Modern GCs have very good profilers; finding dangling references is a lot easier than debugging stack corruptions.
  • Being able to create things on a whim and just drop them on the floor makes programmers a lot more productive than forcing them to micromanage the complete life cycle of every single resource.
  • Even if you encounter a dangling reference issue, fixing it probably takes less time than would have gone into fixing memory corruption issues in a GCless version of the same app.
In brief, the actual sentence is true but misses the entire point of the comment they are replying to. This is sadly common on Internet debates. Let's see some examples.

Computer security

A statement like this:
Using HTTPS on all web traffic is good for security and anonymity.
 might be countered with something like this:
That provides no real security, if the NSA want your data they will break into your apartment and get it.
This statement is again absolutely true. On the other hand if you are not the leader of a nation state or do regular business with international drug cartels, you are unlikely to be the target of a directed NSA offensive.

If you think that this is a stupid point that nobody would ever make, I agree with you completely. I have also seen it used in the real world. I wish I hadn't.

Bugs are caused by incompetents

High level programming languages are nice.
A programming language that guards against buffer overruns is great for security and ease of development.
But not for everyone.
You can achieve the exact same thing in C, you just have to be careful.
This is again true. If every single developer on a code base is 100% focused and 100% careful 100% of the time, then bug free code is possible. Reality has shown time and time again that this does not happen; human beings are simply not capable of operating flawlessly for extended periods of time.

Yagni? What Yagni?

There's the simple.
Processing text files with Python is really nice and simple.
And not so simple.
Python is a complete joke, it will fail hard when you need to process ten million files a second on an embedded microcontroller using at most 2 k of RAM.
Yes. Yes it will. In that use case it would be the wrong choice. You are absolutely correct. Thank you for your insight, good sir, here is a shiny solid gold medal to commemorate your important contribution to this discussion.

What could be the cause of this?

The one thing that school trains you for is that being right is what matters. If you get the answers right in your test, then you get a good grade. Get them wrong and you don't. Maybe this frame of mind "sticks on" once you leave school, especially given that most people who post these kinds of comments seem to be from the "smarter" end of the spectrum (personal opinion, not based on any actual research). In the real world being right is not a merit by itself. In any debate being right is important, of course, but the much more important feature is being relevant. That requires understanding the wider context and possibly admitting that something that is the most important thing in the world to you personally, might be completely irrelevant for the issue at hand.

Being right is easy. Being relevant is extremely difficult.

Sunday, April 5, 2020

Meson manual sales status and price adjustment

The sales dashboard of the Meson manual currently looks like this.

It splits up quite nicely into three parts. The first one is the regular sales from the beginning of the year, averaging less than one sale per day.

The second part (marked with a line) indicates when I was a guest on CppCast talking about Meson and the book. As an experiment I created a time limited discount coupon so that all listeners could buy it with €10 off. As you can tell from the graph it did have an immediate response, which again proves that marketing and visibility are the things that actually matter when trying to sell any product.

After that we have the "new normal", which means no sales at all. I don't know if this is caused by the coronavirus isolation or whether this is the natural end of life for the product (hopefully the former but you can never really tell in advance).

Price reduction

Thus, effective immediately, the price of the book has been reduced to €24.95. You can purchase it from the official site.