Most recent programming languages prefer to link all of their dependencies statically rather than use shared libraries. This has many implications, but for now we'll focus on just one: executable size. It is generally accepted that executables created in this way are bigger than ones that use shared linking. The question is how much bigger, and whether it even matters. Proponents of static linking say the increase is irrelevant given current computers and gigabit networks. Opponents are of the, well, opposite opinion. Unfortunately there are very few real-world measurements around for this.
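For concreteness, here is a minimal sketch of how one could measure the difference locally, assuming GCC and glibc on Linux; the numbers depend heavily on the libc, toolchain and flags, so treat them as illustrative only.

    /* hello.c - a trivial program for comparing linked sizes. */
    #include <stdio.h>

    int main(void) {
        printf("hello, world\n");
        return 0;
    }

    /*
     * Build both variants and compare:
     *   gcc hello.c -o hello-dynamic          # libc stays a shared library
     *   gcc -static hello.c -o hello-static   # needed libc code is copied in
     *   ls -l hello-dynamic hello-static      # compare on-disk sizes
     * With glibc the static binary is typically several hundred kilobytes
     * larger, but a toy program says little about real applications.
     */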
Instead of arguing about hypotheticals, let's try to find some actual facts. Can we find a case where, within the last year or so, a major proponent of static linking has voluntarily switched to shared linking for reasons such as bandwidth savings? If such a case can be found, it would indicate that, yes, the binary size increase caused by static linking is a real issue.
Android WebView, Chrome and the Trichrome library
Last year (?) Android changed the way it provides both the Chrome browser and the System WebView app [1]. Originally the two were fully isolated, but after the change both depend on a new library called Trichrome, which is basically just a single shared library. According to news sites, the reasoning was this:
"Chrome is no longer used as a WebView implementation in Q+. We've moved to a new model for sharing common code between Chrome and WebView (called "Trichrome") which gives the same benefits of reduced download and install size while having fewer weird special cases and bugs."
Google has, for a long time, favored static linking. Yet in this case they chose to switch from static linking to shared linking in their flagship application on their flagship operating system. More importantly, their reasons seem to be purely technical. This would indicate that shared linking does provide real-world benefits compared to static linking.
The space increase from static linking is a myth. It is mostly due to fat libraries like GNU libc, which are optimized for dynamic linking.
https://web.archive.org/web/20110719144310/https://9fans.net/archive/2008/11/142
"Sun's first implementation" (from the 80s?) probably does not have much relevance to toolchains of today.
For comparison, a few years ago I worked on a fairly simple thing that took a few hundred kilobytes in C. We were told to rewrite it in Go. The end result was 15 megabytes in size.
The point of that post: dynamic linking was invented for dynamic loading of modules, not for saving space. Think browser plugins, a feature that, since Java applets and Flash, is no longer considered a good idea.
The size is mostly because Go still relies on a lot of ancient libraries which are not optimized for static linking. They are not designed in a way that allows the compiler to throw out unused code, e.g. by putting functions into separate object files. The same is true for Rust.
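To illustrate the mechanism that comment describes, here is a hedged sketch in C; the file and function names are invented for the example. Compiling each function into its own section approximates the old one-function-per-object-file library layout and lets the linker discard what is never called.

    /* lib.c - two independent functions; only one will be used. */
    #include <stdio.h>

    void used(void)   { puts("used"); }
    void unused(void) { puts("unused"); }  /* candidate for removal */

    /* main.c - pulls in only used(). */
    void used(void);

    int main(void) {
        used();
        return 0;
    }

    /*
     * Linking the object files directly keeps everything:
     *   gcc -c lib.c main.c && gcc lib.o main.o -o demo
     * Splitting functions into sections lets the linker garbage-collect:
     *   gcc -ffunction-sections -c lib.c main.c
     *   gcc -Wl,--gc-sections lib.o main.o -o demo
     *   nm demo | grep unused   # the symbol is gone in this build
     */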
There are some libc implementations, like musl, that are more suitable for that, but not every library on a modern Linux system has a suitable equivalent.
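As a rough sketch of that difference, assuming the musl-gcc wrapper script that musl ships is installed, the hello.c from the sketch near the top of the post can be linked statically against musl instead of glibc:

    /*
     *   musl-gcc -static -Os hello.c -o hello-musl
     *   ls -l hello-musl
     * Because musl is designed with static linking in mind, this binary
     * is usually a small fraction of the size of the glibc -static one.
     */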
"fat libraries which are optimized for dynamic linking" sounds a lot like No true Scotsman argument to me.