Define Better

Someone posted an interesting question on Facebook.

Is using ++i (pre-increment) better than i++ (post-increment)?

My reply was to ask for a definition of “better.” The Facebook posts have a lot of answers from people who appear not to have written a compiler themselves and who are making a lot of assumptions. But the point of this post is not to discuss which option is better for some nebulous definition of “better.” Rather, it is to think about what we mean by “better.”

There are a lot of what I call religious arguments in computing. Tabs or spaces, curly braces aligned in their own column or the opening brace at the end of the line, which programming language is best, and on and on. Most of them stem from differing definitions of “better.”

Sometimes things matter very much at one point in time but not at all later, and yet the bias from the early days remains. There was a time when memory was so tight that a single-character variable name was much preferred to a longer, more descriptive name because of the space it used. Most of my readers probably don’t remember those days, but I do. Fortunately, I recognized when the change happened. Not everyone did so quickly. This is possibly at the heart of some of the tabs vs. spaces argument for at least some old timers.

Other issues have become moot, or mostly moot, because we have much smarter compilers and interpreters these days. We have long been at the point where compilers generate better low-level code from high-level code than even the best assembly language coders write by hand. I seriously doubt that there is any difference between the code generated for ++i or i++ as the increment expression in a for loop these days. Compilers are too smart for that. Compiler writers think long and hard about those sorts of things.
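Here is a quick sketch of that claim in C++ (the function names are my own, just for illustration). The two functions below differ only in using ++i versus i++, and because the value of the increment expression is discarded, a modern optimizing compiler is free to emit identical machine code for both. Compiling them at -O2 and comparing the assembly, for example on godbolt.org, is an easy way to check against your own compiler.

```cpp
#include <cstdio>

// Two loops that differ only in the increment expression.
// The value of ++i / i++ is never used, so an optimizer can
// generate the same code for both functions.
int sum_pre(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)   // pre-increment
        total += a[i];
    return total;
}

int sum_post(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)   // post-increment
        total += a[i];
    return total;
}

int main() {
    int a[] = {1, 2, 3, 4, 5};
    printf("%d %d\n", sum_pre(a, 5), sum_post(a, 5));   // prints: 15 15
    return 0;
}
```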

How two-dimensional arrays are declared and traversed used to make a bigger difference than it does today. I can remember thinking long and hard about how to declare and iterate through two-dimensional arrays. You pretty much had to know how the compiler would lay out memory and how the cache would behave to get optimal performance. I don’t think many people think about that today except in the most performance-critical applications, which are not applications we give to beginners anyway. Compilers do a lot of optimization on this sort of thing, so we don’t have to think much about it. Artificial intelligence and machine learning (see also AI-assisted Programming) will probably make compilers even better than they are today.
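To make the memory layout point concrete, here is a minimal C++ sketch (the array name and sizes are invented for illustration). C and C++ store two-dimensional arrays in row-major order, so keeping the row index in the outer loop walks memory sequentially, while swapping the loops strides through memory and used to be dramatically slower on cached hardware. Modern optimizers can often interchange loops like these on their own, which is exactly the point above.

```cpp
#include <cstddef>
#include <cstdio>

const std::size_t ROWS = 1024, COLS = 1024;
static double grid[ROWS][COLS];   // row-major: grid[r][c] and grid[r][c+1] are adjacent

// Cache-friendly: touches memory in sequential order.
double sum_by_rows() {
    double total = 0.0;
    for (std::size_t r = 0; r < ROWS; ++r)
        for (std::size_t c = 0; c < COLS; ++c)
            total += grid[r][c];
    return total;
}

// Strided access: jumps COLS * sizeof(double) bytes between reads,
// which historically meant far more cache misses.
double sum_by_columns() {
    double total = 0.0;
    for (std::size_t c = 0; c < COLS; ++c)
        for (std::size_t r = 0; r < ROWS; ++r)
            total += grid[r][c];
    return total;
}

int main() {
    grid[3][7] = 1.0;
    printf("%f %f\n", sum_by_rows(), sum_by_columns());   // both print 1.000000
    return 0;
}
```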

Today I think “better” in programming should mean “easier for programmers”: easier to write, easier to understand, easier to modify, and freeing programmers to think about the end result rather than the generated assembly or machine language. Let the software do what software is good at and people do what people are good at.
