Every now and then a discussion arises on whether some piece of scripting functionality should be written as a basic shell script or in Python. Here is a handy step-by-step guide to answering the question. Start at the beginning and work your way down until you have your answer.
Will the script be longer than one screenful of text (80 lines)? If yes ⇒ Python.
Will the script ever need to run on a non-Unix platform such as Windows (including Cygwin)? If yes ⇒ Python.
Does the script require command line arguments? If yes ⇒ Python.
Will the script do any networking (e.g. call curl or wget)? If yes ⇒ Python.
Will the script use any functionality that is in the Python standard library but is not available as a preinstalled binary absolutely everywhere (e.g. compressing with 7zip)? If yes ⇒ Python.
Will the script use any control flow (functions/while/if/etc)? If yes ⇒ Python.
Will the script be modified in the future rather than written once and then dropped? If yes ⇒ Python.
Will the script be edited by more than one person? If yes ⇒ Python.
Does the script call into any non-trivial Unix helper tool (awk, sed, etc)? If yes ⇒ Python.
Does the script call into any tool whose behaviour is different on different platforms (e.g. mkdir -p, some builtin shell functionality)? If yes ⇒ Python.
Does the script need to do anything in parallel? If yes ⇒ Python.
If none of the above match, you have a script that proceeds in a straight line, calls standard and simple Unix commands and never needs to run on a non-Unix platform. In this case a shell script might be OK. But you might still consider doing it in Python, because scripts have a tendency to become more and more complex over time.
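To give a rough idea of what the Python side looks like, here is a minimal sketch (not part of the checklist above; the script and option names are invented for this example) that covers a few of the items using nothing but the standard library: command line arguments, a portable mkdir -p and downloading a file without shelling out to curl or wget.

#!/usr/bin/env python3
# Sketch only: fetch a file into a directory using just the standard library.

import argparse
import os
import pathlib
import urllib.parse
import urllib.request

def main():
    parser = argparse.ArgumentParser(description='Fetch a file into a directory.')
    parser.add_argument('url', help='URL to download')
    parser.add_argument('--outdir', default='downloads', help='output directory')
    args = parser.parse_args()

    outdir = pathlib.Path(args.outdir)
    outdir.mkdir(parents=True, exist_ok=True)  # behaves like mkdir -p on every platform

    name = os.path.basename(urllib.parse.urlparse(args.url).path) or 'download'
    target = outdir / name
    with urllib.request.urlopen(args.url) as response, open(target, 'wb') as out:
        out.write(response.read())
    print('Wrote', target)

if __name__ == '__main__':
    main()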
Friday, October 30, 2015
Proposal for launching standalone apps on Linux
One nagging issue Linux users often face is launching third-party applications, that is, those that do not come from package repositories. As an example, the Pitivi video editor provides daily bundles that you should be able to just download and run. In practice starting them takes a surprising amount of effort and may require going to the command line (which is an absolute no-no if your target audience includes regular people).
This page outlines a simple, distro-agnostic proposal for dealing with the problem. It is modelled after OS X application bundles (which are really just directories with some metadata) and only requires changes to file managers. It is also a requirement that the applications must be installable and runnable without any privilege escalation (i.e. they must not require root).
The core concept is an application bundle. The definition of an app bundle named foobar is a subdirectory named foobar.app which contains a file called foobar.desktop. There are no other requirements for the bundle and it may contain arbitrary files and directories.
When a file manager notices a directory that matches the above two requirements, and thus is a bundle, it must treat it as an application. This means the following:
- it must display it as an application using the icon specified in the desktop file rather than as a subdirectory
- when double clicked, it must launch the app as specified by the desktop file rather than showing the contents of the subdir
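To illustrate the idea, here is a rough Python sketch of the detection and launch logic a file manager could implement; the function names are invented for this example and a real launcher would handle desktop entry field codes and localisation properly.

# Sketch: a directory foo.app containing foo.desktop is treated as an
# application and launched through the desktop file's Exec line.

import configparser
import pathlib
import shlex
import subprocess

def bundle_desktop_file(path):
    """Return the desktop file if 'path' is an app bundle, otherwise None."""
    p = pathlib.Path(path)
    if p.is_dir() and p.suffix == '.app':
        desktop = p / (p.stem + '.desktop')
        if desktop.is_file():
            return desktop
    return None

def launch_bundle(path):
    desktop = bundle_desktop_file(path)
    if desktop is None:
        raise ValueError(f'{path} is not an application bundle')
    entry = configparser.ConfigParser(interpolation=None)
    entry.read(desktop)
    exec_line = entry['Desktop Entry']['Exec']
    # Drop field codes such as %f or %U; a real launcher would substitute
    # them according to the Desktop Entry specification.
    args = [a for a in shlex.split(exec_line) if not a.startswith('%')]
    subprocess.Popen(args, cwd=desktop.parent)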
Monday, October 12, 2015
Some comments on the C++ modules talk
Gabriel Dos Reis had a presentation on C++ modules at CppCon. The video is available here. I highly recommend that you watch it; it's good stuff.
However, as a build system developer, one thing caught my eye. The way modules are used (at around 40 minutes into the presentation) has a nasty quirk. The basic approach is that you have a source file foo.cpp, which defines a module Foobar. To compile it you say this:
cl -c /module foo.cxx
This causes the compiler to output foo.o as well as Foobar.ifc, which contains the binary definition of the module. To use the module you would compile a second source file like this:
cl -c baz.cpp /module:reference Foobar.ifc
This is basically the same way that Fortran does its modules and it suffers from the same problem which makes life miserable for build system developers.
There are two main reasons. The first is that the name of the ifc file cannot be known beforehand without scanning the contents of the source files. The second is that you can't know what filename to put on the second command line without scanning the source to see which modules it imports _and_ potentially scanning every other source file in your project to find out which file actually provides each of them.
Most modern build systems work in two phases. First you parse the build definition and determine how, and in which order, to do the individual build steps. Basically this just serialises the dependency DAG to disk. The second phase loads the DAG, checks its status and takes all the steps necessary to bring the build up to date.
The first of the two phases takes a lot more effort and is usually much slower than the second. A typical ratio for a medium-sized project is that the first phase takes roughly ten seconds of CPU time while the second takes a fraction of a second. In contemporary C++ almost all code changes only require rerunning the second phase, whereas changing the build configuration (adding new targets etc.) requires doing the first phase as well.
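As an illustration of the split (a toy model, not how Meson or any other real build system is implemented), the two phases could look roughly like this, with the configure step writing the DAG to disk and the build step only comparing timestamps:

# Phase one is the expensive part and serialises the DAG; phase two only
# loads it, checks file timestamps and reruns the stale commands.

import json
import os
import subprocess

def configure(targets, dag_file='build.dag'):
    # Phase one: runs only when the build definition changes.
    # 'targets' maps each output file to its input files and the command
    # that produces it, listed in dependency order.
    with open(dag_file, 'w') as f:
        json.dump(targets, f)

def build(dag_file='build.dag'):
    # Phase two: runs on every rebuild and should take a fraction of a second.
    with open(dag_file) as f:
        targets = json.load(f)
    for output, node in targets.items():
        if is_stale(output, node['inputs']):
            subprocess.check_call(node['command'])

def is_stale(output, inputs):
    if not os.path.exists(output):
        return True
    output_time = os.path.getmtime(output)
    return any(os.path.getmtime(src) > output_time for src in inputs)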
Fast rebuilds like this are possible because the output files and their names are fully knowable without looking at the contents of the source files. With the proposed scheme this is no longer the case. A simple (if slightly pathological) example should clarify the issue.
Suppose you have file A that defines a module and file B that uses it. You compile A first and then B. Now change the source code so that the module definition goes to B and A uses it. How would a build system know that it needs to compile B first and only then A?
The answer is that without scanning the contents of A and B before running the compiler this is impossible. This means that to get reliable builds either all build systems need to grow a full C++ parser or all C++ compilers must grow a full build system. Neither of these is particularly desirable. Even if build systems grew these parsers, they would need to reparse all changed source files before starting the compiler and adjust the compiler arguments accordingly. This makes every rebuild take the slow path of phase one instead of the fast phase two.
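To make the extra work concrete, here is a rough sketch of the kind of scanning pass a build tool would be forced to do under this scheme just to determine a valid compilation order. It is only a regex-level approximation of real module declarations, which is rather the point: doing it properly requires a real C++ parser, and it has to run on every rebuild.

import re

EXPORT_RE = re.compile(r'^\s*export\s+module\s+([\w.]+)\s*;', re.MULTILINE)
IMPORT_RE = re.compile(r'^\s*import\s+([\w.]+)\s*;', re.MULTILINE)

def compile_order(sources):
    """Return the sources ordered so that module providers come before their users."""
    provides = {}   # module name -> source file that exports it
    imports = {}    # source file -> set of module names it imports
    for src in sources:
        with open(src) as f:
            text = f.read()
        for mod in EXPORT_RE.findall(text):
            provides[mod] = src
        imports[src] = set(IMPORT_RE.findall(text))

    ordered = []
    done = set()

    def visit(src):
        if src in done:
            return
        done.add(src)
        for mod in imports[src]:
            if mod in provides:
                visit(provides[mod])
        ordered.append(src)

    for src in sources:
        visit(src)
    return ordered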
Potential solutions
There are a few possible solutions to this problem, none of which are perfect. The first is to require that module Foo must be defined in a source file called Foo.cpp. This makes everything deterministic again but has that iffy Java feeling about it. The second option is to define the module in a "header-like" file rather than in source code. Thus a foo.h file would become foo.ifc and the compiler could pick it up automatically instead of the .h file.
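As an aside, the first rule would shrink the scanning problem above to a pure name lookup, something along these lines (illustration only):

def module_source(module_name):
    # Hypothetical helper: with the "module Foo lives in Foo.cpp" rule the
    # provider of a module is knowable without opening a single source file.
    return module_name + '.cpp'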