So, a couple of days ago I had a good discussion on Twitter with a developer I respect about the future of JS. This was in response to this article, which basically suggests that within the next few years JS will become the de facto programming language on both the client and the server. Now, I’ve read that article, and re-read it, and re-read it again. I still don’t agree with it. Let’s discuss the client and server sides as separate entities…
There are already three programming languages out there that compile down to JS – CoffeeScript (CS), TypeScript (TS) and Dart. Dart has slightly loftier goals in that it sees itself in the long term as a complete replacement for JS, with its own interpreter/compiler in the browser, but in the short term it fulfils the same purpose as the other two. CS uses a completely different syntax to JS, whilst TS is a superset of JS, meaning that the barrier to entry is extremely low. At any rate, the fact that these three languages exist tells us something – that JS has problems when it comes to writing large-scale applications. It has poor facilities for abstraction, for module discovery and, in general, for reasoning about your code in a way that lets you do simple things like rename a variable with any degree of certainty. Developers at places like Google have had to come up with elaborate naming conventions for variables, classes and the like, just so people can infer the scoping or types in play. I thought we dropped Hungarian notation, with its szCustomerName ridiculousness, for good in the 90s, but evidently not.
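To make the renaming point concrete, here’s a minimal sketch (the names are my own invention) of why no tool can rename a JS property with complete certainty – a “field” can be referenced through a computed string that only exists at runtime:

```javascript
// A property referenced two ways: statically, and via a string.
var order = { total: 100 };

function read(obj, fieldName) {
  // The property name is just data here – a rename tool that changes
  // `total` to `grandTotal` has no way to know it must also change
  // every string that happens to contain "total".
  return obj[fieldName];
}

console.log(order.total);          // 100 – a static reference a tool can see
console.log(read(order, "total")); // 100 – invisible to a naive rename
```

Rename the property and the first line breaks loudly at edit time; the second keeps compiling and quietly returns `undefined`.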
So where am I going with this rant? Simply that whilst JS will almost certainly remain the most popular language that web applications run on, it won’t be the dominant programming language for developers. I certainly wouldn’t want to write the next Facebook, Gmail or whatnot in plain JS. It’s a nightmare to maintain and, without basic constructs like interfaces, classes and modules, doesn’t easily allow for organising code into manageable chunks. I’m not saying it can’t be done, but it involves lots of extra work by the development team – work that should be done for us by a modern language (cast your mind back to what JS was originally designed for – it certainly wasn’t writing Google Maps).
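For what it’s worth, the standard workaround for JS’s missing module construct is to fake one by hand with an immediately-invoked function expression. A sketch of the common “revealing module” pattern (the names here are hypothetical):

```javascript
// An IIFE gives us a private scope; we choose what to expose.
var accountModule = (function () {
  var balance = 0; // private – invisible outside the closure

  function deposit(amount) {
    balance += amount;
    return balance;
  }

  // Only the returned object is the public surface.
  return { deposit: deposit };
})();

console.log(accountModule.deposit(50));    // 50
console.log(typeof accountModule.balance); // "undefined" – private state stays hidden
```

It works, but this is exactly the sort of boilerplate convention – rather than a language feature – that the paragraph above is complaining about.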
On the server side things get even more bizarre. For starters, not only do you have all the same issues as above (which in today’s world are likely to become even more of a problem as big-data work gets both more complex and more commonplace), but a whole host of other issues rear their heads.
Firstly, server-side applications tend to be much more mission-critical than front-end applications from a data point of view. A bug on your website may only result in displaying some data incorrectly – a transient issue that can be fixed with no permanent damage. Conversely, a bug on the server may end up actually calculating the wrong data and persisting it to your data store. Because of the nature of server-side applications, you might not even notice this until much later. Indeed, in JS, accessing the wrong field because of a typo doesn’t cause a runtime error – the code just carries on happily. What’s more, given the relatively immature state of JS testing frameworks, these sorts of errors might be even more prevalent. And I don’t want to generalise too much, but how many front-end JS developers would be confident writing a server-side application in a test-first manner?
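A quick illustration of the typo problem (the field names are hypothetical). Nothing throws; the bad value is simply computed and would be persisted like any other:

```javascript
var invoice = { amount: 250 };

// A typo: `ammount` instead of `amount`. JS doesn't complain –
// the access just yields `undefined`.
var tax = invoice.ammount * 0.2;

console.log(tax); // NaN – and nothing stops us writing it to the database
```

In a statically checked language this is a compile error; here it is a silently corrupted record discovered much later.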
What about features that you would expect to see in a language that was going to be used to write a mission-critical server-side application? How about excellent multithreading support? No, wait, how about any multithreading support? What about simple-to-use yet powerful asynchronous support (I’m talking about the kind that C#, F# and, I believe, Python now have)? The ability to know with certainty where a particular object is being used? I just don’t understand why you would want to give up all of those language features, let alone things like test frameworks – or tooling – that people have grown accustomed to.
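For contrast, here is roughly what asynchronous work looks like in Node’s callback style. fetchUser and fetchOrders are hypothetical stand-ins – real versions would do actual I/O, but the shape of the calling code is the same:

```javascript
// Node-style callbacks: each step nests inside the previous one,
// and each step must remember to check its own error argument.
function fetchUser(id, callback) {
  callback(null, { id: id, name: "Ann" });
}

function fetchOrders(user, callback) {
  callback(null, [user.name + "'s order"]);
}

fetchUser(1, function (err, user) {
  if (err) { return console.error(err); }
  fetchOrders(user, function (err, orders) {
    if (err) { return console.error(err); }
    console.log(orders); // ["Ann's order"]
  });
});
```

Two sequential steps already produce two levels of nesting and two hand-written error checks; compare that with a language where the same flow is two straight-line awaited calls.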