Sunday, September 13, 2020

Proposal for a computer game research topic: the walk/game ratio

I used to play a fair bit of computer games, but in recent years the amount of time I spent on games had decreased. Then the lockdown happened, I bought a PS4 and got back into gaming, which was fun. As is often the case, once you get back into something after a break you find yourself paying attention to things that you never noticed before.

In this particular case it was about those parts of games where you are not actually playing the game. Instead you are walking/driving/riding from one place to another because the actual thing you want to do is somewhere else. A typical example of this is Red Dead Redemption II (and by extension all GTAs). At first wandering the countryside is fun and immersive, but at some point it becomes tedious and you just wish you were at your destination (fast travel helps, but not enough). Note that this does not apply to extra content. Having a lush open sandbox world that people can explore at their leisure is totally fine. This is about the "grinding by walking" that you have to do in order to complete the game.

This brings up several interesting questions. How much time, on average, do computer games require players to spend travelling from one place to another as opposed to doing the thing the game is actually about (e.g. shooting nazis, shooting zombie nazis, hunting for secret nazi treasure and fighting underwater nazi zombies)? Does this ratio vary over time? Are there differences between genres, design studios and publishers? It turns out that determining these numbers is fairly simple but laborious. I have too many ongoing projects to do this myself, so here's a research outline for anyone to put in their research grant application:

  1. Select a representative set of games.
  2. Go to speedrun.com and download the fastest any-% glitchless run available.
  3. Split the video into separate segments such as "actual gameplay", "watching unskippable cutscenes", "walkgrinding" and "waiting for game to finish loading".
  4. Tabulate times, create graphs.
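The tabulation step could be sketched roughly as follows. The segment labels and timestamps below are made-up example data, not measurements from any real run:

```python
# Sketch: turn hand-labeled speedrun segments into per-category totals
# and ratios. All labels and timestamps here are hypothetical examples.
from collections import defaultdict

def tabulate(segments):
    """segments: list of (label, start_seconds, end_seconds) tuples."""
    totals = defaultdict(float)
    for label, start, end in segments:
        totals[label] += end - start
    run_length = sum(totals.values())
    # Map each category to (total seconds, fraction of the whole run).
    return {label: (t, t / run_length) for label, t in totals.items()}

segments = [
    ("actual gameplay", 0, 1800),
    ("walkgrinding", 1800, 2700),
    ("cutscenes", 2700, 2900),
    ("loading", 2900, 3000),
]
for label, (secs, ratio) in tabulate(segments).items():
    print(f"{label}: {secs:.0f} s ({ratio:.0%})")
```

With per-game results in this shape, computing the walk/game ratio per genre, studio or release year is a simple grouping exercise.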

Hypothetical results

As this research has not been done (that I know of and am able to google up) we don't know what the results would be. That does not stop us from speculating endlessly, so here are some estimates that this research might uncover:
  • Games with a lot of walkgrinding: RDR II, Assassin's Creed series, Metroid Prime.
  • Games with a medium amount of walkgrinding: Control, Uncharted.
  • Games with almost no walkgrinding: Trackmania, Super Meat Boy.
  • Games that require the player to watch the same unskippable cutscenes over and over: Super Mario Sunshine.
  • Newer games require more walkgrinding simply because game worlds have gotten bigger.

Sunday, August 30, 2020

Resist the urge to argue about app store security

Recently Miguel de Icaza wrote a blog post arguing that closed computing platforms, where a major US corporation decides what software users are allowed to install, are a good thing. This has, naturally, caused people to become either confused, disappointed or angry. Presumably many people are writing responses and angry comments. I almost started writing one myself, pointing out all the issues I found in the post.

Doing that would probably create a fairly popular blog post with followups. It might even get to the reddits and hackernewses and generate tons of comments where people would duke it out on the issue of user choice vs the safety provided by a curated walled garden. There would be hundreds, if not thousands, of snarky tweets that make their poster feel superior for a while but are ultimately not productive. To quote Star Trek: Deep Space Nine:
Spare me the please-think-of-the-children speech and I'll spare you the users-must-have-control-over-their-own-devices speech. [1]
Having us, the user and developer community, argue about this issue is pointless, unproductive and actively harmful. This particular phenomenon is not new; it even has a name. In fact the approach is so old that the name is in Latin: divide et impera. Divide and conquer. All the time and energy that we spend arguing about this issue among ourselves is time not spent working towards a solution.

The actual solution to this issue is conceptually so simple it could be called trivial. The entire problem at hand is one that has been created by Apple. They are also the ones who can solve it. All they have to do is to add one new piece of functionality to iOS devices: users who so choose can flip an option on the device they own, allowing them to download, install and use any application binaries freely from the Internet. Enabling this functionality could be done, for example, in a similar way to how Android phones enable developer mode. Once implemented, Apple would then make a public statement saying that this workflow is fully supported and that applications obtained in this way will, now and forevermore, have access to all the same APIs as official store apps do.

This is all it takes! Further, they could make it so that IT departments and concerned parents can disable this functionality on their employees' and children's devices, so that those devices can only obtain apps via the app store. This gives both sets of users exactly what they want. Those who prefer living in a walled curated garden can do so. Those with the desire and willingness to venture outside the walls and take responsibility for their own actions can do so too, and still get all the benefits of sandboxing and base platform security.

Apple could do this. Apple could have done this at launch. Apple could have done this at any time since. Apple has actively chosen not to do this. Keep this in mind if you ever end up arguing about this issue on the Internet. People who have different priorities and preferences are not "the enemy". If you get into the flaming and the shouting, you have been divided. And you have been conquered.

[1] Might not be a word-for-word accurate transcription.

Friday, August 28, 2020

It ain't easy being a unique snowflake, the laptop purchasing edition

As the lockdown drags on I have felt the need to buy a new laptop as my old one is starting to age. A new category in the laptop market is 2-in-1 laptops with full drawing pen support. It has been a long time since I have done any real drawing or painting and this seemed like a nice way to get back into that. On the other hand the computer also needs to be performant for heavy duty software development such as running the Meson test suite (which is surprisingly heavy) and things like compiling LLVM. This yields the following requirements:

  • A good keyboard, on the level of Thinkpads or 2015-ish Macbook Pros
  • 13 or 14 inch screen, at least Retina level resolution, HDR would be nice, as matte as possible
  • Pressure and tilt support for the pen
  • At least 16GB of RAM
  • USB A, C and HDMI connectors would be nice
  • At least Iris level graphics with free drivers (so no NVidia)
  • A replaceable battery would be nice
  • Working dual boot (I need Win10 for various things), full Linux support (including wifi et al)

After a bunch of research it turns out that this set of requirements might be impossible to fulfill. Here are the three best examples that I found.
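Spec-sheet comparison boils down to checking each candidate against the list of hard requirements. A minimal sketch, where the candidate names and specs are entirely hypothetical placeholders rather than real product data:

```python
# Sketch: match candidate laptops against hard requirements.
# Both the requirement keys and the candidate specs are illustrative
# placeholders, not verified product data.
REQUIREMENTS = ["good_keyboard", "pen_tilt", "16gb_ram",
                "iris_graphics", "linux_support"]

candidates = {
    "Candidate A": {"good_keyboard", "16gb_ram", "iris_graphics",
                    "linux_support"},
    "Candidate B": {"good_keyboard", "pen_tilt", "16gb_ram",
                    "linux_support"},
}

def missing(specs):
    """Return the hard requirements a candidate fails to meet."""
    return [r for r in REQUIREMENTS if r not in specs]

for name, specs in candidates.items():
    gaps = missing(specs)
    print(f"{name}: {'OK' if not gaps else 'missing ' + ', '.join(gaps)}")
```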

HP Elite Dragonfly

The documentation is unclear on whether the pen supports tilting. More importantly this model is only available with Intel 8th gen processors and online reviews say the screen is super glossy.

Lenovo Yoga X1 Gen5

Pen does not support tilting. The graphics card is a standard Intel HD, not Iris.

Dell XPS 13 2-in-1 7390

This is the one that gets the closest. There are no USB A or HDMI connectors, and the pen requires an AAAA battery. This is annoying but tolerable. Unfortunately the keyboard is of the super thin type, which is just not usable.

So now what?

I'm probably going to stick with the current machine for now. New models come out at a fairly steady pace, so maybe this mythical white whale will become available for purchase at some point. Alternatively I might eventually just fold, give up on some requirements and buy a compromise machine. Typically this causes the dream machine to become available for purchase immediately afterwards, when it is too late.

Monday, August 17, 2020

Most "mandatory requirements" in corporations are imaginary

In my day job I work as a consultant. Roughly six months ago my current client had a non-negotiable requirement that consultants are not allowed to work remotely. Employees did have this privilege. This is, clearly, a stupid policy and there have been many attempts across the years to get it changed. Every time someone in the management chain has axed the proposal with some variation of "this policy can not be changed because this is our policy and thus can not be changed", possibly with a "due to security reasons" thrown in there somewhere.

Then COVID-19 happened and the decision came to lock down the head office. Less than a day after this everyone (including consultants) got remote working rights, VPN tokens and all the other bits and bobs. The old immutable, mandatory, unalterable and just-plain-how-we-do-things rules and regulations seemed to vanish into thin air in the blink of an eye. The question is: why did this happen?

The answer is simple: it became mandatory due to external pressure. A more interesting question is: if it really was that simple, how come this had not happened before? Further, are the same reasons that blocked this obvious improvement for so long also holding back other policy braindeadisms that reduce employee productivity? Unfortunately the answers here are not as clear-cut, and different organizations may have different underlying causes for the same problem.

That being said, let's look at one potential cause: the gain/risk imbalance. Typically many policy and tech problems occur at a fairly low level. Changes benefit mostly the people who, as one might say, are doing the actual work. In graph form it might look like this.


This is not particularly surprising. People higher up the management chain have a lot on their plate. They probably could not spend the time to learn about benefits from low level work flow changes even if they wanted to and the actual change will be invisible to them. On the other hand managers are fairly well trained to detect and minimize risk. This is where things fall down because the risk profile for this policy change is the exact opposite.


The big question on managers' minds (either consciously or unconsciously) when approving a policy change is "if I do this and anything goes wrong, will I get blamed?". This should not be the basis of choice but in practice it sadly is. This is where things go wrong. The people who would most benefit from the change (and thus have the biggest incentive to get it fixed) do not get to make the call. Instead it goes to people who will see no personal benefit, only risk. After all, the current policy has worked well thus far so don't fix it if it is not broken. Just like no one ever got fired for buying IBM, no manager has ever gotten reprimanded for choosing to uphold existing policy.

This is an unfortunate recurring organizational problem. Many policies are chosen without full knowledge of the problem; sometimes this information is even impossible to get, if the issue is new and no best practices have yet been discovered. Then the policy becomes standard practice, and then a mandatory requirement that can't be changed, even though it is not really based on anything, provides no benefit and just causes harm. Such are the downsides of hierarchical organization charts.

Saturday, August 8, 2020

The second edition of the Meson manual is out

I have just uploaded the second edition of the Meson manual to the web store for your purchasing pleasure. New content includes things like:

  • Reference documentation updated to match 0.55
  • A new chapter describing "random tidbits" for practical usage
  • Cross compilation chapter update with new content
  • Bug fixes and minor updates
Everyone who has bought the book can download this version (and in fact all future updates) for free.

Sales stats up to this point

The total number of sales is roughly 130 copies corresponding to 3500€ in total sales. Expenses thus far are a fair bit more than that.
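As a back-of-the-envelope check on the numbers above, that works out to roughly 27€ per copy:

```python
# Back-of-the-envelope math on the sales figures quoted above.
copies = 130     # approximate copies sold
revenue = 3500   # approximate total sales in euros
print(f"average price per copy: {revenue / copies:.2f} EUR")
```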

Wednesday, July 29, 2020

About that "Google always builds everything from source every time" thing

One of the big battles of dependencies is whether you should use libraries prebuilt by someone else (e.g. Debian-style system packages) or vendor the sources of all your deps and build everything yourself. Whenever this debate gets going, someone is going to do a "well, actually" and say some variation of this:
Google vendors all dependencies and builds everything from source! Therefore that is clearly the right thing to do and we should do that also.
The obvious counterargument to this is the tried-and-true if all your friends jumped off a bridge would you do it too response known by every parent in the world. The second, much lesser known counterargument is that this statement is not actually true.

Google does not actually rebuild all code in their projects from source. Don't believe me? Here's exhibit A:


The original presentation video can be found here. Note that the slide and the video speak of multiple prebuilt dependencies, not just one [0]. Thus we find that even Google, with all of their power, money, superior engineering talent and insistence on rebuilding everything from source, do not rebuild everything from source. Instead they occasionally have to use prebuilt third party libraries just like the rest of the world. Thus a more accurate form of the above statement would be this:
Google vendors most dependencies and builds everything from source when possible and when that is not the case they use prebuilt libraries but try to keep quiet about it in public because it would undermine their publicly made assertion that everyone should always rebuild everything from source.
The solution to this is obvious: you rebuild everything you can from sources and get the rest as prebuilt libraries. What's the big deal here? By itself there would not be one, but this ideology has consequences. Many tools and even programming languages designed nowadays only handle the build-from-source case, because obviously everyone has the source code for all their deps. Unfortunately this is just not true. No matter how bad prebuilt no-access-to-source libraries are [1], they are also a fact of life and must be natively supported. Not doing so is a major adoption blocker. This is one of the unfortunate side effects of dealing with all the nastiness of the real world instead of a neat idealized sandbox.
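For what it's worth, supporting both cases natively can be as simple as trying the system copy first and only building from vendored sources as a fallback. A Meson-style sketch (the dependency name 'foo' and its subproject are hypothetical):

```meson
# meson.build sketch: tries a system copy of the library first (which
# may well be a prebuilt, no-source binary) and only falls back to
# building the vendored subproject if none is found.
foo_dep = dependency('foo', fallback : ['foo', 'foo_dep'])
executable('app', 'main.c', dependencies : foo_dep)
```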

[0] This presentation is a few years old. It is unknown whether there are still prebuilt third party libraries in use.

[1] Usually they are pretty bad.

Sunday, July 26, 2020

Pinebook Pro longer term usage report

I bought a Pinebook Pro in the first batch, and have been using it on and off for several months now. Some people I know wanted to know if it is usable as a daily main laptop.

Sadly, it is not. Not for me at least. It is fairly close though.

Random bunch of things that are annoying or broken

I originally wanted to use stock Debian but at some point the Panfrost driver broke and the laptop could not start X. Eventually I gave up and switched to the default Manjaro. Its installer does not support an encrypted root file system. A laptop without an encrypted disk is not really usable as a laptop as you can't take it out of your house.

The biggest gripe is that everything feels sluggish. Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs. As an extreme example switching between channels in Slack takes five to ten seconds. It is unbearably slow. The wifi is not very good, it can't connect reliably to an access point in the next room (distance of about 5 meters). The wifi behaviour seems to be distro dependent so maybe there are some knobs to twiddle.

Video playback in browsers is not great. Youtube works in the default size, but fullscreen causes a massive frame rate drop. Fullscreen video playback in e.g. VLC is smooth.

Basic shell operations are sluggish too. I have a ZSH prompt that shows the Git status of the current directory. Entering a directory that contains a Git repo freezes the terminal for several seconds. Basically every time you need to get something from disk that is not already in the cache there is a noticeable delay.

The screen size and resolution scream for fractional scaling but Manjaro does not seem to provide it. A scale of 1 is a bit too small and 2 is way too big. The screen is matte, which is totally awesome, but unfortunately the colors are a bit muted and for some reason it seems a bit fuzzy. This may be because I have not used a sub-retina level laptop display in years.

The trackpad's motion detector is rubbish at slow speeds. There is a firmware update that makes it better but it's still not great. According to the forums someone has already reverse engineered the trackpad and created an unofficial firmware that is better. I have not tried it. Manjaro does not provide a way to disable tap-to-click (a.k.a. the stupidest UI misfeature ever invented including the emojibar) which is maddening. This is not a hardware issue, though, as e.g. Debian's Gnome does provide this functionality. The keyboard is okayish, but sometimes detects keypresses twice, which is also annoying.

For light development work the setup is almost usable. I wrote a simple 3D model viewer app using Qt Creator and it was surprisingly smooth all round, the 3D drivers worked reliably and so on. Unfortunately invoking the compiler was again sluggish (this was C++, though, so some is expected). Even simple files that compile instantly on x86_64 took seconds to build.

Can the issues be fixed?

It's hard to say. The Panfrost driver is under heavy development, so it will probably keep getting better. That should fix at least the video playback issues. Many of the remaining issues seem to be on the CPU and disk side, though. It is unknown whether there are enough optimization gains to be had to make the experience fully smooth and, more importantly, whether there are people doing that work. It seems feasible that the next generation of hardware will be fast enough for daily usage.

Bonus content: compiling Clang

Just to see what would happen, I tested whether it is possible to compile Clang from source (it being the heaviest fairly-easy-to-build program that I know of). It turns out that you can. Here are the steps for those who want to try it themselves:
  • Checkout Clang sources
  • Create an 8 GB swap file and enable it
  • Configure Clang, add -fuse-ld=gold to the linker flags (according to the Clang docs there should be a builtin option for this, but in reality there isn't) and set the maximum number of parallel link jobs to 1
  • Start compilation with ninja -j 4 (any more and the compilation jobs cause a swap storm)
  • If one of the linker jobs causes a swap storm, kill the processes and build the problematic library by hand with ninja bin/libheavy.so
  • Start parallel compilation again and if it freezes, repeat as above
After about 7-8 hours you should be done.
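The steps above as a command sketch. This assumes the standard llvm-project monorepo layout; the exact cmake invocation is my reconstruction of the description (raw -fuse-ld=gold linker flags plus LLVM_PARALLEL_LINK_JOBS), and target names like bin/libheavy.so are placeholders, so treat this as a recipe rather than a script to paste blindly:

```shell
# 1. Check out the sources.
git clone https://github.com/llvm/llvm-project.git
cd llvm-project && mkdir build && cd build

# 2. Create and enable an 8 GB swap file.
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# 3. Configure: gold linker via raw linker flags, one parallel link job.
cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_PROJECTS=clang \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_EXE_LINKER_FLAGS=-fuse-ld=gold \
  -DCMAKE_SHARED_LINKER_FLAGS=-fuse-ld=gold \
  -DLLVM_PARALLEL_LINK_JOBS=1

# 4. Build with limited parallelism to avoid a swap storm.
ninja -j 4

# 5. If a link job still thrashes, kill it, build that one target alone
#    (e.g. "ninja bin/libheavy.so", name hypothetical), then resume
#    with "ninja -j 4".
```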