If modern programming had the equivalent of a Holy Bible, you might find something almost identical to Genesis 11:4-9, which tells the story of the Tower of Babel: humans sure were getting great things done when there was only one language. They aspired to build this nifty tower that was going to be so high, because…you know. They could open a nice restaurant with a view of all of the kingdoms of man. But God wasn’t having any of that: too godlike. So he knocked man off his high horse, and scattered him to the winds, and cursed him with 6,906 distinct languages1. Talk about confusing! How was man ever to get anything done now?
Sometimes people ask me why there are so darn many computer languages. Wouldn’t we be better off if we only had one? Seems very inefficient. Like so many things, the answer is not simple. If there were only one computer language, there would be a kind of increased efficiency: with only one language to learn, people could collaborate more easily, and the pool of programmers for any given project would always be greater. That’s not the whole story, though: that universality would come with a cost.
Traditionally, some languages have just been better suited to certain problems than others. As languages become more sophisticated, and there’s more cross-pollination between them, that’s becoming less true. For example, 10 years ago, Perl was king when it came to text manipulation; now almost all languages offer string manipulation libraries that are at least as good as Perl’s, sometimes better. First-class functions, once a feature available only in niche academic languages and (surprisingly) JavaScript, are now offered by almost all modern languages. Java, a notable holdout, got lambda expressions and functional interfaces in Java 8 last year.
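To make the first-class-functions point concrete, here’s a minimal sketch of the Java 8 style I’m referring to (the class and method names are my own, purely illustrative): a method that accepts behavior as a parameter, with a lambda supplied at the call site where a verbose anonymous inner class used to be required.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class LambdaDemo {
    // Behavior is passed in as a value -- the essence of first-class functions.
    static long countMatching(List<String> words, Predicate<String> test) {
        return words.stream().filter(test).count();
    }

    public static void main(String[] args) {
        List<String> languages = Arrays.asList("perl", "java", "ruby", "php");

        // Before Java 8, this call required an anonymous inner class
        // implementing Predicate; a lambda expresses it in a single line.
        long fourLetterNames = countMatching(languages, s -> s.length() == 4);
        System.out.println(fourLetterNames); // prints 3
    }
}
```

JavaScript, of course, has let you pass functions around like this since day one, which is part of why it earns the “surprisingly” above.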
There are still some languages that are uniquely suited to certain applications. For example, humble C is still the king of embedded application development (though more and more microprocessors are providing higher-level language support). For statistical analysis, R is where it’s at (or its commercial uncle, MATLAB). And so on.
But increasingly, it isn’t the features of the language itself that drive people to choose one language over another (since the important language features are becoming ubiquitous anyway). So what, then? If all languages are essentially equivalent2, why choose Java over C# or Ruby over PHP? The answer is usually “inertia.”
For example, if you’re a bank, chances are, you’re going to pick Java. Not because Java is better than C# for doing financial computation, but because the financial industry is mostly built on Java. Programmers who know Java and finance are easier to find. Libraries and frameworks that are designed for financial computation are usually written in Java. You pick Java because it’s the path of least resistance.
Does that mean that a bank shouldn’t start a new project in a different language? Not necessarily; the benefits may outweigh the costs. For example, the Portland-based bank Simple (which is amazing, by the way — I would never go back to a traditional bank now) is using Ruby, Scala, and Clojure in addition to the financial industry standard Java.
My next few blog posts are going to explore some of the “hot” new programming languages by contrasting them with the languages they’re intended to (or most likely to) eventually replace. Keep in mind, however, that mature languages evolve and are actively trying to keep up with nimble new languages, so with rare exceptions there are no clear victors here, only healthy competition.
Come back later this week for a discussion about JavaScript (surprise surprise) as compared to PHP.
1: The most recent count, according to Ethnologue.
2: As a matter of fact, most general-purpose languages are Turing complete. If two languages A and B are both Turing complete, any computation that can be expressed in A can also be expressed in B (though the relative ease may differ considerably).