I read a book titled Liar’s Poker some time back. Written by Michael Lewis, it chronicles his stint at Salomon Brothers. One of the terms used to characterize traders who raked in money for the firm was Big Swinging You Know What. I added the You Know What, but you get the idea. The term captured the sort of pride that haloed a successful trader.
The point is that programmers, both women and men, can sometimes wear their preferred language like a halo, looking down on the less endowed. One of my favorite lines about this hierarchy within the field of programming comes from Steve Yegge, a satirical blogger, rant artist, and software engineer. Here’s his description of what he calls the DAG of Disdain:
“At Google, most engineers are too snooty to do mobile or web programming. ‘I don’t do frontend’, they proclaim with maximal snootiness.
There’s a phenomenon there that I like to call the ‘DAG of Disdain’, wherein DAG means Directed Acyclic Graph, which is a bit like a flowchart.
And Search is cooler than Ads, which is cooler than Apps, which is cooler than Tools, which is cooler than Frontends. And so on. Programmers love to look down on each other. And if you’re unlucky enough to be a Google mobile engineer, you’re stuck scuffling around at the bottom of several totem poles with everyone looking down on you.”
Of course, this is at Google, which isn’t exactly known for its robust UI. It’s known for its powerhouse of a search engine, with marketers talking about its new algorithms the way they talk about restaurant specials. But just like the bond traders at Salomon, the demand for C++ programmers is high, which drives up their value and, for some, their egos.
They live in their paradisiacal niche of compiled programming languages, lording it over those who share their realm and casting a sidelong glance at the other realm: the interpreted languages and the jank often associated with them.
So, when it comes down to it, which is better? A compiled programming language? Or an interpreted one? A few developers had a friendly discussion about exactly this. Here are their responses.
On The Fence
At the risk of being pedantic, there’s no reason you can’t have both a compiled and an interpreted implementation of a language. For example, Haskell has GHC, which compiles down to native code, and the now-defunct Hugs implementation, which interprets Haskell. Scheme and ML are other languages with both interpreted and compiled implementations.
Then you get into interesting territory with languages like Java, which are compiled to bytecode, and the bytecode is interpreted (and possibly just-in-time compiled) at runtime. A lot of interpreted languages (Python, Ruby, Lua) actually compile to bytecode and execute that when you run a script.
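As an illustrative sketch of that last point (using CPython, whose behavior the comment describes), you can see the bytecode directly: every Python function carries a compiled code object, and the standard-library `dis` module disassembles it.

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled this function to bytecode; the raw
# bytes live on the function's code object.
print(type(add.__code__.co_code))  # <class 'bytes'>

# dis renders those bytes as human-readable instructions
# (exact opcode names vary between CPython versions).
dis.dis(add)
```

So "interpreted" Python never interprets your source text line by line at runtime; it interprets this compiled bytecode.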
Performance is a big factor when it comes to interpreted vs. compiled. The rule of thumb is that compiled is faster than interpreted, but there are fancy interpreted systems that will generate faster code (I believe some commercial Smalltalk implementations do this).
One nice thing about compiling down to native code is that you can ship binaries without needing to deploy a runtime; this is one of Go’s strongest features, in my opinion!
Ideally, it would be like Common Lisp in that it has interpretation and compilation both built in from the start. ANSI Common Lisp described it as having “the whole language there all the time”: you can compile code at runtime, or run code at compile time (which allows for lots of metaprogramming).
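A rough Python analogue of this idea (not Common Lisp itself, just a sketch of compile-at-runtime; the `source` and `namespace` names here are made up for illustration) is the built-in `compile()` and `exec()` pair, which turn a string of source into executable code while the program is running:

```python
# Build source code as a string at runtime...
source = "def square(x):\n    return x * x\n"

# ...compile it to a code object, then execute it into a namespace.
code_obj = compile(source, "<generated>", "exec")
namespace = {}
exec(code_obj, namespace)

print(namespace["square"](7))  # 49
```

Lisp’s version is far more integrated (macros run full Lisp at compile time), but the basic move, compiling new code mid-run, is the same.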
When you want fast iterative development, you want interpreted code. In production (and especially on resource-constrained devices), you want code to run as fast and memory-efficiently as possible, so you want it compiled. The ideal language would make it effortless to transition between the two as needed.
It all depends on the intended purpose.

Compiled languages:

- Are faster at runtime
- Conceal source code
- Have an associated compile time
- Are better when you’re not making frequent changes to the code and care a lot about runtime speed

Interpreted languages:

- Are slower at runtime
- Have open source code, though that code can be obfuscated (minification, uglification, etc.)
- Don’t have to compile before use, but can have an initial parse time that’s typically much faster than compile time
- Are better when you’re making frequent changes to the code and don’t care as much about runtime speed
There are also factors regarding whether you need additional software to be able to run the code, but languages like Java (compiled but still needs the JVM) kinda muddy the waters on this.
For Interpreted Languages
Ben Halpern, in response to Rob Hoelz:
I don’t think this is pedantic, I feel like it’s a great evaluation of the whole question.
I personally take the give and take of each scenario and don’t draw the line for my own uses.
In my life these days, I’ve been writing Ruby and JS for the most part but a bit more Swift for iOS lately. I don’t like that I have to wait to compile and run when I write native in Swift, but I accept it as part of the world I’m in when I work with this tool.
I hope in the long run that good work keeps going into making interpreted code more performant and compiled code easier to work with.
I don’t like that I have to wait to compile and run when I write native in Swift
That’s one of the “selling” points of Go: fast compilation times. In my experience, Go invalidates this complaint.
I’m sure Swift’s compilation phase is also slowed by the enormous amount of stuff you have to compile for a mobile app to function 😀
For Compiled Languages
For example, if I change a variable name and forget to update the code that used that variable, a compiled language will fail to compile, and I am forced to fix it everywhere.
In an interpreted language, even one that is preprocessed using something like Webpack, you can’t know for sure that you’ve renamed the variable everywhere until you start getting errors at runtime.
You can mitigate this with static analyzers, but that’s an extra thing you have to set up, whereas it comes for free with compiled languages.
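A minimal sketch of that renaming hazard in Python (the function and variable names are made up for illustration): the module loads without complaint, and the stale name only blows up when the broken path actually runs.

```python
def total_price(items):
    # Suppose the parameter used to be called `prices` and this line
    # was missed during the rename: Python happily accepts it...
    return sum(prices)

# ...and nothing fails until the function is actually called.
try:
    total_price([1, 2, 3])
except NameError as err:
    print("only caught at runtime:", err)
```

A checker such as pyflakes or mypy would flag `prices` before the code ever runs, which is exactly the “extra thing to set up” the comment mentions.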
Compiled languages just win on everything (performance, reliability, …).
But most famous enterprise backend languages (Python, C#, Java, etc.) use the best of both worlds: they get compiled into some special binary format, then an interpreter on any platform (Android, iOS, Windows, Linux) takes that special code and executes it.
All that to achieve super cross-platform support, and they do work on virtually all devices.
Ah, JS is an exception 😀
C and C++ are just faster than any other language (given we use the latest compiler optimizations and the right implementation of the software)… the Perl interpreter itself is implemented in C and C++.
All the people here agree on this point: news.ycombinator.com/item?id=8626131
Ditto for reliability. I can’t see C being more reliable than Haskell, for instance.
From my experience, run-time errors occur more often in interpreted languages than in compiled ones. I work in Python and C#, and I hit far more run-time errors in Python than in C#.
Interpreted languages can often give you more instant feedback when coding. But they also typically have slower code and you might have to ship your code in source (or obfuscated) form with the run-time interpreter which is often less than ideal.
Then there are intermediate languages that are compiled a bit, but still use a run-time. Java and C# come to mind here.
Then there are fully compiled languages. Compiling your code to native machine code is nice for source security. Performance is often better as well, but the compile process can take time.
I guess I would prefer a fast, compiled language these days.
A lot of people are arguing about performance.
Assuming good implementation:
Compiled languages are faster in general because they don’t need to run through any intermediate interpreter. Instead of finding out what needs to be done and then doing it, at run time the program just does what needs to be done.
In an interpreted language, the program needs to first figure out what needs to be done (be interpreted), and only then can it go do it. The sorts of optimizations this allows, such as just-in-time compilation, can reduce the interpretation overhead, but I can’t think of a practical, useful way to turn that overhead into an advantage.
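One way to feel that overhead (a rough, machine-dependent sketch in Python): sum a list once with a Python-level loop, where the interpreter dispatches every iteration, and once with the built-in `sum()`, whose loop runs in compiled C.

```python
import timeit

data = list(range(1_000_000))

def python_loop():
    total = 0
    for x in data:       # each iteration goes through the interpreter
        total += x
    return total

t_interpreted = timeit.timeit(python_loop, number=5)
t_compiled = timeit.timeit(lambda: sum(data), number=5)  # loop runs in C

print(f"Python-level loop: {t_interpreted:.3f}s")
print(f"built-in sum():    {t_compiled:.3f}s")
```

On a typical CPython build the C-level loop is several times faster; the exact numbers depend on the machine, but the gap is the interpretation overhead made visible.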
Compiled languages always have superior performance and require far fewer resources at runtime. Interpreted languages are neat while debugging, but if they don’t have a compiler, they’re out for me.
What Are Your Thoughts?