Many articles claim to have constructed a list of the worst programming languages out there. The problem is that there is no quantifiable way to build a list of languages that are objectively horrible. Of course, the same is true for many other categories, like food or movies. Yet rating systems exist, built on agreed-upon rubrics. For example, poor acting and poor CGI may make a particular action movie score low on Rotten Tomatoes, which simply aggregates ratings from critics.
Still, rubrics alone do not weed out inherent subjectivity. There will almost always be disagreement about a particular movie or game or restaurant. Our insatiable desire to categorize everything into numbers and groups forces us to confront varying viewpoints. We consume lists and reviews to have our fears allayed, our beliefs confirmed, or our knowledge of the order of things expanded.
Someone new to a programming language may want reassurance that the language of their choice isn't obsolete or widely considered horrendous. Seeing which languages are the worst may help them avoid wasted time. That person may also have a high-paying job in mind. Others may simply be curious. The point is, how can a rubric be created to determine a “bad” programming language that would satisfy someone to the extent that movie reviews satisfy wary moviegoers?
I discussed this with a few developers on Dev.to, and here are their responses.
The number one problem that makes a bad language is pitfalls. I define pitfalls as potential bugs that tend to get exposed only after they’ve done some damage. Usually because:
- It makes sense to expect it to work.
- It works in the simple cases.
- It doesn’t work in more complex cases.
- Once you find out it doesn’t work, you have already integrated it into your code – so you need to do lots of refactoring to fix it.
There are other reasons to not like a language, but most of them are a matter of personal taste. But I think everyone would agree that pitfalls are bad.
It’s extremely subjective and situational, but there are a lot of metrics that one could apply (though there’s not necessarily an objective way to measure them), and if a language scores low on everything, one might call it “bad”. But which metrics are important to you is going to vary quite a lot. Here are some I can think of:
- Manipulexity and whipupitude, Larry Wall’s terms for what he was trying for with Perl: fine grained control plus the ability to create a useful program quickly.
- Usefulness for large teams or large projects
- Succinctness (the ability to express the intent of the program with the fewest number of symbols)
- Backward/forward compatibility (If I write something today, will it run a year from now? Ten years? 10,000 years?)
- Welcoming culture
- Friendliness for beginners
- Ability to illustrate CS or programming concepts
- Ability to hire people who know it or are willing to learn it
- Ease of use for some problem domain (e.g. server-side web programming for PHP)
I’m probably forgetting some. But in general, if something scores low on all these (or at least the ones you care about), then you might be justified in calling it “bad”. But then what you care about might not be what the language authors care about.
There are also deliberately “bad” languages like Brainfu*k and INTERCAL. But maybe they’re good for their intended use, which might make them not really bad.
Looking at the list in Dustin King’s excellent response, you can see how some people may favor friendliness and a strong community culture when they’re just starting out with a language, while more seasoned developers might bash that same language for its “syntactic sugar” and slow compile times. This creates a strong bias, because many programming languages can’t avoid all the pitfalls that Ayre and King put forth. Diane Fay put it best in responding to my question:
“When people say a programming language is “bad”, what they mean is that it’s difficult for them to do what they want to accomplish with it or that they have aesthetic objections to how programs are written and structured in it. There are some cases where most people agree (MUMPS is nigh-universally abhorred), but it’s still fundamentally a matter of opinion.”
Ayre, King, and Fay all agree that, for the most part, the notion of a bad programming language is subjective. One can even go as far as saying it’s a myth, because trends typically shape a particular view of a language. Now that we have Swift, which was supposed to replace Objective-C, you see articles saying Objective-C is one of the worst programming languages to learn. That wasn’t the case in the years before Swift came out. Unlike movies and video games, programming languages come in and out of fashion. Some become obsolete while others become mainstays.
In the end, a programming language is bad according to your opinion; or, as another developer who responded to my question put it, according to your laziness toward understanding and mastering a new language.