But to anyone who has ever wondered whether using m4 macros to configure autoconf to write a shell script to look for 26 Fortran compilers in order to build a Web browser was a bit of a detour, Brooks offers well-reasoned hope that there can be a better way.
Today’s Unix/Posix-like operating systems, even including IBM’s z/OS mainframe version, are, seen with 1980 eyes, identical; yet the 31,085 lines of configure for libtool still check whether <sys/stat.h> and <stdlib.h> exist, even though the Unixen that lacked them had neither sufficient memory to execute libtool nor disks big enough for its 16-MB source code.
This is commonly believed, but in fact a type checker introduces bugs of its own: it shifts the cost of a certain class of mistakes so much that the kinds of mistakes people make in its presence destroy their ability to think clearly about types.
Experience strongly suggests that strongly typed languages whose compilers produce informative error messages when type constraints are violated cause their programmers to believe several idiotic ideas: (1) Type errors are important. They are not; you would not make them if the cost of making them were higher. (2) Satisfying the compiler is no longer merely an inconsequential necessary condition for a program to be correct; it becomes a sufficient condition in the minds of those who make the first mistake. (3) Errors in programs are separated into two kinds of vastly different nature: static errors (which the compiler may report) and dynamic errors (which the compiler cannot find).
This false dichotomy completely warps the minds of programmers in these languages: instead of being treated as fundamentally stupid errors that the compiler should just go and fix, static errors become /more/ important than dynamic errors, leading to serious growth in dynamic errors, because programmers come to rely on the compiler to detect and correct their mistakes.
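The static/dynamic split described above can be made concrete. A minimal sketch in Python with type hints (the function and the bug are hypothetical, chosen only for illustration): a checker such as mypy would reject the type-incorrect call, but the logic error inside `mean` satisfies every annotation and is therefore invisible to it.

```python
from typing import List

def mean(xs: List[float]) -> float:
    # Dynamic (logic) error: dividing by a constant instead of len(xs).
    # Every type annotation is satisfied, so a type checker reports nothing.
    return sum(xs) / 2

# Static error: a checker such as mypy rejects this call before it runs.
# mean("not a list")  # error: incompatible type "str"

# Dynamic error: type-correct, yet wrong for any list whose length is not 2.
print(mean([1.0, 2.0, 3.0]))  # prints 3.0, but the true mean is 2.0
```

The checker-satisfied program is exactly the case the author warns about: passing the type check is necessary, not sufficient.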
In my view, to be a programmer is to be sufficiently well versed in some non-computer-related field that you can see how the computer can aid practioners of that field accomplish their goals. Many programmers never progress beyond the point of aiding their own use of the computer and never do anything “real” – the number of software packages that help people read mail and news and waste enormous amount of time in front of the computer are legion, but they tend to make people spend /more/ time on these tasks than they would or should have done compared to actually productive tasks.
As for the argument that unit testing can replace strong typing, consider the common refactoring practice in strongly typed languages: changing the type of an argument of a particular function. In a strongly typed language, it’s enough to modify the declaration of that function and then fix all the build breaks. In a weakly typed language, the fact that a function now expects different data cannot be propagated to call sites. Unit testing may catch some of the mismatches, but testing is almost always a probabilistic rather than a deterministic process. Testing is a poor substitute for proof.
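The refactoring hazard described above can be sketched in a dynamically typed language (function and data names here are hypothetical). Suppose `send` once took a plain string and has been refactored to take a (recipient, body) pair; a call site the refactoring missed keeps passing a string, and, because a two-character string happens to unpack into two values, even a test that exercises that line can pass while doing the wrong thing:

```python
def send(message):
    # After refactoring, `message` is expected to be a (recipient, body) pair.
    recipient, body = message
    return f"to {recipient}: {body}"

# A call site the refactoring missed: still passes a plain string.
# It does not crash -- "hi" unpacks into "h" and "i" -- it is just wrong.
print(send("hi"))            # prints "to h: i" -- silently incorrect

# A longer string fails, but only when this exact path is executed:
# send("hello")              # ValueError: too many values to unpack
```

This is the probabilistic nature of testing in miniature: whether the mismatch surfaces depends on which inputs the tests happen to feed it.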
The only serious argument I hear against strong static type checking is that it might eliminate some programs that are semantically correct. In practice, this happens extremely rarely and, in any case, every language provides some kind of backdoor to bypass the type system when that’s really necessary. Even Haskell has unsafeCoerce. But such devices should be used judiciously. Franz Kafka’s character, Gregor Samsa, breaks the type system when he metamorphoses into a giant bug, and we all know how it ends.
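Python offers an analogous backdoor (this is an illustration in a different language, not the author's example): `typing.cast` changes what the checker believes without changing the value, so the lie is accepted statically and only surfaces later, dynamically, far from where it was told.

```python
from typing import cast

value: object = "a giant bug"

# cast() tells the type checker this is an int; at runtime it is still a str.
# No conversion happens -- cast() simply returns its argument unchanged.
n = cast(int, value)

# The checker is satisfied, but the metamorphosis fails dynamically:
try:
    print(n + 1)
except TypeError as e:
    print("runtime says:", e)
```

As with unsafeCoerce, nothing stops the program from compiling (or type-checking); the wreckage is deferred to run time.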
This is honestly the kind of crappy advice that has me paying little to no attention to the Haskell community any longer. A noisy subset of silly people has collectively gone nuts for abstractions that provide little value other than making them feel special. It is most disappointing.