Sunday, August 30, 2020

Resist the urge to argue about app store security

Recently Miguel de Icaza wrote a blog post arguing that closed computing platforms, where a major US corporation decides what software users are allowed to install, are a good thing. This has, naturally, caused people to become confused, disappointed or angry. Presumably many people are writing responses and angry comments. I almost started writing one myself, pointing out all the issues I found in the post.

Doing that would probably create a fairly popular blog post with followups. It might even get to the reddits and hackernewses and generate tons of comments where people would duke it out over user choice vs. the safety provided by a curated walled garden. There would be hundreds, if not thousands, of snarky tweets that make their poster feel superior for a while but are ultimately not productive. To quote Star Trek Deep Space Nine:
Spare me the please-think-of-the-children speech and I'll spare you the users-must-have-control-over-their-own-devices speech [1].
Having us, the user and developer community, argue about this issue is pointless, unproductive and actively harmful. This particular phenomenon is not new; it even has a name. In fact this approach is so old that the name is in Latin: divide et impera. Divide and conquer. All the time and energy that we spend on arguing this issue among ourselves is time not spent on working towards a solution.

The actual solution to this issue is conceptually so simple it could be called trivial. The entire problem at hand is one that has been created by Apple. They are also the ones that can solve it. All they have to do is add one new piece of functionality to iOS devices: let users who so choose change an option on the device they own, allowing them to download, install and use any application binaries freely from the Internet. Enabling this functionality could be done, for example, in a similar way to how Android phones enable developer mode. Once implemented Apple would then make a public statement saying that this workflow is fully supported and that applications obtained in this way will, now and forevermore, have access to all the same APIs as official store apps do.

This is all it takes! Further, they could make it so that IT departments and concerned parents could disable this functionality on their employees' and children's devices so that they can only obtain apps via the app store. This gives both sets of users exactly what they want. Those who prefer living in a walled curated garden can do so. Those with the desire and willingness to venture outside the walls and take responsibility for their own actions can do so too and still get all the benefits of sandboxing and base platform security.

Apple could do this. Apple could have done this at launch. Apple could have done this at any time since. Apple has actively chosen not to do this. Keep this in mind if you ever end up arguing about this issue on the Internet. People who have different priorities and preferences are not "the enemy". If you get into the flaming and the shouting you have been divided. And you have been conquered.

[1] Might not be a word-for-word accurate transcription.

Friday, August 28, 2020

It ain't easy being a unique snowflake, the laptop purchasing edition

As the lockdown drags on, I have felt the need to buy a new laptop, since my old one is starting to age. A new category in the laptop market is 2-in-1 laptops with full drawing pen support. It has been a long time since I have done any real drawing or painting and this seemed like a nice way to get back into that. On the other hand the computer also needs to be performant for heavy-duty software development such as running the Meson test suite (which is surprisingly heavy) and things like compiling LLVM. This yields the following requirements:

  • A good keyboard, on the level of Thinkpads or 2015-ish Macbook Pros
  • 13 or 14 inch screen, at least Retina level resolution, HDR would be nice, as matte as possible
  • Pressure and tilt support for the pen
  • At least 16GB of RAM
  • USB A, C and HDMI connectors would be nice
  • At least Iris level graphics with free drivers (so no NVidia)
  • A replaceable battery would be nice
  • Working dual boot (I need Win10 for various things), full Linux support (including wifi et al)
After a bunch of research it turns out that this set of requirements might be impossible to fulfill. Here are the three best examples that I found.

HP Elite Dragonfly

The documentation is unclear on whether the pen supports tilting. More importantly this model is only available with Intel 8th gen processors and online reviews say the screen is super glossy.

Lenovo Yoga X1 Gen5

Pen does not support tilting. The graphics card is a standard Intel HD, not Iris.

Dell XPS 13 2-in-1 7390

This is the one that gets the closest. There are no USB A or HDMI connectors, and the pen requires an AAAA battery. This is annoying but tolerable. Unfortunately the keyboard is of the super thin type, which is just not usable.

So now what?

Probably going to stick with the current one for now. New models come out at a fairly steady pace, so maybe this mythical white whale will become available for purchase at some point. Alternatively I might eventually just fold, give up on some requirements and buy a compromise machine. Typically this causes the dream machine to become available for purchase immediately afterwards, when it is too late.

Monday, August 17, 2020

Most "mandatory requirements" in corporations are imaginary

In my day job I work as a consultant. Roughly six months ago my current client had a non-negotiable requirement that consultants are not allowed to work remotely. Employees did have this privilege. This is, clearly, a stupid policy and there have been many attempts across the years to get it changed. Every time, someone in the management chain has axed the proposal with some variation of "this policy can not be changed because this is our policy and thus can not be changed", possibly with a "due to security reasons" thrown in there somewhere.

Then COVID-19 happened and the decision came to lock down the head office. Less than a day after this everyone (including consultants) got remote working rights, VPN tokens and all other bits and bobs. The old immutable, mandatory, unalterable and just-plain-how-we-do-things rules and regulations seemed to vanish into thin air in the blink of an eye. The question is why did this happen?

The answer is simple: because it became mandatory due to external pressure. A more interesting question is: if it really was that simple, how come this had not happened before? Further, are the same reasons that blocked this obvious improvement for so long also holding back other policy braindeadisms that reduce employee productivity? Unfortunately the answers here are not as clear-cut and different organizations may have different underlying causes for the same problem.

That being said, let's look at one potential cause: the gain/risk imbalance. Typically many policy and tech problems occur at a fairly low level. Changes benefit mostly the people who, as one might say, are doing the actual work. In graph form it might look like this.


This is not particularly surprising. People higher up the management chain have a lot on their plate. They probably could not spend the time to learn about the benefits of low-level workflow changes even if they wanted to, and the actual change will be invisible to them. On the other hand managers are fairly well trained to detect and minimize risk. This is where things fall down, because the risk profile for this policy change is the exact opposite.


The big question on managers' minds (either consciously or unconsciously) when approving a policy change is "if I do this and anything goes wrong, will I get blamed?". This should not be the basis of choice but in practice it sadly is. This is where things go wrong. The people who would most benefit from the change (and thus have the biggest incentive to get it fixed) do not get to make the call. Instead it goes to people who will see no personal benefit, only risk. After all, the current policy has worked well thus far so don't fix it if it is not broken. Just like no one ever got fired for buying IBM, no manager has ever gotten reprimanded for choosing to uphold existing policy.

This is an unfortunate recurring organizational problem. Many policies are chosen without full knowledge of the problem; sometimes this information is even impossible to get, if the issue is new and no best practices have yet been discovered. The policy then becomes standard practice and eventually a mandatory requirement that can't be changed, even though it is not really based on anything, provides no benefit and just causes harm. Such are the downsides of hierarchical organization charts.

Saturday, August 8, 2020

The second edition of the Meson manual is out

I have just uploaded the second edition of the Meson manual to the web store for your purchasing pleasure. New content includes things like:

  • Reference documentation updated to match 0.55
  • A new chapter describing "random tidbits" for practical usage
  • Cross compilation chapter updated with new content
  • Bug fixes and minor updates
Everyone who has bought the book can download this version (and in fact all future updates) for free.

Sales stats up to this point

The total number of sales is roughly 130 copies, corresponding to 3500€ in revenue. Expenses thus far are a fair bit more than that.

Wednesday, July 29, 2020

About that "Google always builds everything from source every time" thing

One of the big battles of dependencies is whether you should use prebuilt libraries obtained from somewhere (e.g. Debian-style system packages) or vendor all the sources of your deps and build everything yourself. Whenever this debate gets going, someone is going to do a "well, actually" and say some kind of variation of this:
Google vendors all dependencies and builds everything from source! Therefore that is clearly the right thing to do and we should do that also.
The obvious counterargument to this is the tried-and-true "if all your friends jumped off a bridge, would you do it too?" response known by every parent in the world. The second, much lesser known counterargument is that this statement is not actually true.

Google does not actually rebuild all code in their projects from source. Don't believe me? Here's exhibit A:


The original presentation video can be found here. Note that the slide and the video speak of multiple prebuilt dependencies, not just one [0]. Thus we find that even Google, with all of their power, money, superior engineering talent and insistence on rebuilding everything from source, do not rebuild everything from source. Instead they have to occasionally use prebuilt third party libraries just like the rest of the world. Thus a more accurate form of the above statement would be this:
Google vendors most dependencies and builds everything from source when possible and when that is not the case they use prebuilt libraries but try to keep quiet about it in public because it would undermine their publicly made assertion that everyone should always rebuild everything from source.
The solution to this is obvious: you just rebuild all the things that you can from source and get the rest as prebuilt libraries. What's the big deal here? By itself it would not be one, but this ideology has consequences. There are many tools and even programming languages designed nowadays that only handle the build-from-source case, because obviously everyone has the source code for all their deps. Unfortunately this is just not true. No matter how bad prebuilt no-access-to-source libraries are [1], they are also a fact of life and must be natively supported. Not doing that is a major adoption blocker. This is one of the unfortunate side effects of dealing with all the nastiness of the real world instead of a neat idealized sandbox.
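As an illustration of how unglamorous the mixed approach is, here is a rough shell sketch of "use the prebuilt system library if one exists, otherwise build the vendored copy". The library (zlib) and the subprojects/zlib path are placeholder assumptions for this sketch, not anything taken from the post:

    #!/bin/sh
    # Sketch only: prefer a prebuilt system copy of a dependency,
    # fall back to building the vendored sources if it is missing.
    if pkg-config --exists zlib; then
        # The platform ships a prebuilt library, use it as-is.
        DEP_CFLAGS="$(pkg-config --cflags zlib)"
        DEP_LIBS="$(pkg-config --libs zlib)"
    else
        # No prebuilt copy available, build the vendored sources instead.
        # The subprojects/zlib location is a made-up example path.
        (cd subprojects/zlib && ./configure --prefix="$PWD/stage" && make && make install)
        DEP_CFLAGS="-I$PWD/subprojects/zlib/stage/include"
        DEP_LIBS="-L$PWD/subprojects/zlib/stage/lib -lz"
    fi
    cc $DEP_CFLAGS -o app main.c $DEP_LIBS

Real build systems hide this plumbing, but the point stands: both branches have to exist.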

[0] This presentation is a few years old. It is unknown whether there are still prebuilt third party libraries in use.

[1] Usually they are pretty bad.

Sunday, July 26, 2020

Pinebook Pro longer term usage report

I bought a Pinebook Pro in the first batch, and have been using it on and off for several months now. Some people I know wanted to know if it is usable as a daily main laptop.

Sadly, it is not. Not for me at least. It is fairly close though.

Random bunch of things that are annoying or broken

I originally wanted to use stock Debian but at some point the Panfrost driver broke and the laptop could not start X. Eventually I gave up and switched to the default Manjaro. Its installer does not support an encrypted root file system. A laptop without an encrypted disk is not really usable as a laptop as you can't take it out of your house.

The biggest gripe is that everything feels sluggish. Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs. As an extreme example, switching between channels in Slack takes five to ten seconds. It is unbearably slow. The wifi is not very good; it can't connect reliably to an access point in the next room (a distance of about 5 meters). The wifi behaviour seems to be distro dependent so maybe there are some knobs to twiddle.

Video playback on browsers is not really nice. Youtube works in the default size, but fullscreen causes a massive frame rate drop. Fullscreen video playback in e.g. VLC is smooth.

Basic shell operations are sluggish too. I have a ZSH prompt that shows the Git status of the current directory. Entering a directory that has a Git repo freezes the terminal for several seconds. Basically, anything that needs to read something from disk that is not already in cache leads to a noticeable delay.

The screen size and resolution scream for fractional scaling but Manjaro does not seem to provide it. A scale of 1 is a bit too small and 2 is way too big. The screen is matte, which is totally awesome, but unfortunately the colors are a bit muted and for some reason it seems a bit fuzzy. This may be because I have not used a sub-retina level laptop display in years.

The trackpad's motion detector is rubbish at slow speeds. There is a firmware update that makes it better but it's still not great. According to the forums someone has already reverse engineered the trackpad and created an unofficial firmware that is better. I have not tried it. Manjaro does not provide a way to disable tap-to-click (a.k.a. the stupidest UI misfeature ever invented including the emojibar) which is maddening. This is not a hardware issue, though, as e.g. Debian's Gnome does provide this functionality. The keyboard is okayish, but sometimes detects keypresses twice, which is also annoying.
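For reference, on desktops that do expose the setting, turning off tap-to-click is a one-liner. The key below is GNOME-specific (it is not a Manjaro default-desktop knob), so treat it only as an example of what the missing switch looks like elsewhere:

    # GNOME only: disable tap-to-click on the touchpad.
    gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click false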

For light development work the setup is almost usable. I wrote a simple 3D model viewer app using Qt Creator and it was surprisingly smooth all round, the 3D drivers worked reliably and so on. Unfortunately invoking the compiler was again sluggish (this was C++, though, so some of that is expected). Even simple files that compile instantly on x86_64 took seconds to build.

Can the issues be fixed?

It's hard to say. The Panfrost driver is under heavy development, so it will probably keep getting better. That should fix at least the video playback issues. Many of the remaining issues seem to be on the CPU and disk side, though. It is unknown whether there are enough optimization gains to be had to make the experience fully smooth and, more importantly, whether there are people doing that work. It seems feasible that the next generation of hardware will be fast enough for daily usage.

Bonus content: compiling Clang

Just to see what would happen, I tried to find out whether it is possible to compile Clang from source (it being the heaviest fairly-easy-to-build program that I know of). It turns out that you can. Here are the steps for those who want to try it themselves (a rough command-line version follows after the list):
  • Checkout Clang sources
  • Create an 8 GB swap file and enable it
  • Configure Clang, add -fuse-ld=gold to the linker flags (according to the Clang docs there should be a built-in option for this but in reality there isn't) and set max parallel link jobs to 1
  • Start compilation with ninja -j 4 (any more and the compilation jobs cause a swap storm)
  • If one of the linker jobs causes a swap storm, kill the processes and build the problematic library by hand with ninja bin/libheavy.so
  • Start parallel compilation again and if it freezes, repeat as above
After about 7-8 hours you should be done.
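Roughly in command form the above looks like this. The CMake flags and paths are from memory rather than copied from the actual session, so consider this a hedged sketch rather than a verified recipe:

    # 1. Check out the sources (a shallow clone saves disk space).
    git clone --depth 1 https://github.com/llvm/llvm-project.git
    # 2. Create and enable an 8 GB swap file.
    sudo fallocate -l 8G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # 3. Configure: pass -fuse-ld=gold via the raw linker flags and
    #    limit parallel link jobs to one to avoid running out of memory.
    mkdir build && cd build
    cmake -G Ninja ../llvm-project/llvm \
        -DLLVM_ENABLE_PROJECTS=clang \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_EXE_LINKER_FLAGS=-fuse-ld=gold \
        -DCMAKE_SHARED_LINKER_FLAGS=-fuse-ld=gold \
        -DLLVM_PARALLEL_LINK_JOBS=1
    # 4. Build with limited parallelism; if a link job starts swapping,
    #    kill it, build that target alone (ninja <target>) and resume.
    ninja -j 4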

Sunday, July 19, 2020

The ABI stability matryoshka

At the C++ on Sea conference last week Herb Sutter gave a talk about replacing an established thingy with a new version. Obviously the case of ABI stability came up and he answered with the following (the video is not available so this quote is only approximate, though there is an earlier version of the talk viewable here):
Backwards compatibility is important so that old code can keep working. When upgrading to a new system it would be great if you could voluntarily opt into using the old ABI. So far no-one has managed to do this but if we could crack this particular nut, making major ABI changes would become a lot easier.
Let's try to do exactly that. We'll start with a second (unattributed) quote that often gets thrown around in ABI stability discussions:
Programming language specifications do not even acknowledge the existence of an ABI. It is wholly a distro/tool vendor problem and they should be the ones to solve it.
Going from this we can find out the actual underlying problem, which is running programs of two different ABI versions at the same time on the same OS. The simple solution of rebuilding the world from scratch does not work. It could be done for the base platform but, due to business and other reasons, you can't enforce a rebuild of all user applications (and those users, lest we forget, pay a very hefty amount of money to OS vendors for the platform their apps run on). Mixing new and old ABI apps is fragile and might fail due to the weirdest of reasons no matter how careful you are. The problem is even more difficult in "rolling release" cases such as Debian unstable, where you can't easily rebuild the entire world in one go, but we'll ignore that case for now.

It turns out that there already exists a solution for doing exactly this: Flatpak. Its entire reason for existence is to run binaries with a different ABI (and even API) on a given Linux platform while making it appear as if they were running on the actual host. There are other ways of achieving the same, such as Docker or systemd-nspawn, but they aim to isolate the two things from each other rather than unifying them. Thus a potential solution to the problem is that whenever an OS breaks ABI compatibility in a major way (which should be rare, like once every few years) it should provide the old ABI version of itself as a Flatpak and run legacy applications that way. In box diagram architecture format it would look like this:


The main downside of this is that the OS vendor's QA department has twice as much work, as they need to validate both ABI versions of the product. There is also probably a fair bit of work to make the two versions work together seamlessly, but once you have that you can do all sorts of cool things, such as building the outer version with libstdc++'s debug mode enabled. Normally you can't do that easily as it massively breaks ABI, but now it is easy. You can also build the host with address or memory sanitizer enabled for extra security (or just debugging).
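In day-to-day use a legacy application would then be launched against the legacy runtime much like any other Flatpak app. The runtime and application IDs below are invented for illustration; only the --runtime override itself is an existing Flatpak feature:

    # Hypothetical IDs: an old-ABI platform runtime shipped by the OS vendor
    # and a legacy application that still targets it (assumes a configured remote).
    flatpak install org.exampleos.OldPlatform
    flatpak run --runtime=org.exampleos.OldPlatform org.example.LegacyApp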

If you add something like btrfs subvolumes and snapshotting, you can do all sorts of cool things. Suppose you have a very simple system with a web server and a custom backend application that you want to upgrade to the new ABI version. It could go something like this (a rough command sketch of the first steps is at the end of this post):

  1. Create a new btrfs subvolume, install the new version to that and set up the current install as the inner "Flatpak" host.
  2. Copy all core system settings to the outer install.
  3. Switch the main subvolume to the new install, reboot.
  4. Now the new ABI environment is running and usable but all apps still run inside the old version.
  5. Copy web server configuration to the outer OS and disable the inner one. This is easy because all system software has the exact same version in both OS installs. Reboot.
  6. Port the business app to run on the new ABI version. Move the stored data and configuration to the outer version. The easiest way to do this is to have all this data on its own btrfs subvolume which is easy to switch over.
  7. Reboot. Done. Now your app has been migrated incrementally to the new ABI without intermediate breakage (modulo bugs).
The best part is that if you won't or can't upgrade your app to the new ABI, you can stop at step #5 and keep running the old ABI code until the whole OS goes out of support. The earlier ABI install will remain as is, can be updated with new RPMs and so on. Crucially this will not block others from switching to the new ABI at their leisure. Which is exactly what everyone wanted to achieve in the first place.
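To make the first few steps concrete, here is a very rough sketch of the subvolume juggling. The subvolume names, the /mnt/pool mount point and the way the new OS gets installed are all placeholder assumptions, not a tested procedure:

    # Assumes the top-level btrfs volume is mounted at /mnt/pool.
    # Step 1: create a fresh subvolume and install the new OS version into it.
    btrfs subvolume create /mnt/pool/os-new
    # (run the new release's installer or bootstrap tool against /mnt/pool/os-new)
    # Step 2: copy core system settings into the new install.
    cp -a /etc/fstab /etc/hostname /mnt/pool/os-new/etc/
    # Step 3: make the new subvolume the default root and reboot into it.
    btrfs subvolume list /mnt/pool           # note the ID of os-new
    btrfs subvolume set-default <id-of-os-new> /mnt/pool
    reboot

After the reboot the old install can be wired up as the inner environment and the per-service migration of steps 5-7 can proceed at its own pace.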