Hacks, confusions, and sillinesses

Paul C. Anagnostopoulos

Okay, this is a thread for describing hacks, confusions, sillinesses, stupidities, and downright disturbing nonsense in the programming business. I'll start.

I can never remember to use the new operator in object-oriented languages. Why is it necessary? When the class name is used as a method call, it's obviously a call to a constructor. What's the operator for? Is there some syntactic confusion I'm overlooking?

~~ Paul

 
That is an absolutely zany, nonsensical, absurd belief, Jim.

So, what's the collective name for $, @, and %? Why aren't they just unary operators?

~~ Paul
 
I worked with a programmer who thought it was called 'the turnary operator' because it 'turned into either one of the two choices'!
 
Well, I'd like to go with the turnary operator, but they aren't operators. If they were, you could write $(subexpression). Sad, ain't it?

~~ Paul
 
Oh yeah I was referring to ?:, the so-called ternary operator.

I used to really like arrays of pointers to functions and things like that in C. C++ just took all the fun out of programming.
 
I can never remember to use the new operator in object oriented languages. [...] Is there some syntactic confusion I'm overlooking?

~~ Paul


I remember trying to learn Smalltalk, and finding that the '=' operator was confusing in that respect.

OO languages have methods that do something, but there is often ambiguity about which thing should be the object.

E.g., do I write print.char('a') or char('a').print?
 
Objects were confusing as hell for me until I learned assembly and figured out that objects are just memory locations. Is the "object" idea supposed to simplify that?
 
Sorry, I still think that non-OO always made MUCH more sense to me than having objects that do stuff AND carry data about AND magically inherit stuff. And there's no standard OO database either. OO is actually rather clumsy and prone to generating grossly overbloated code that does sweet FA most of the time, if you ask me.

The way I write it, I create a single object called a "program", which has all the variables and procedures I need created within it, and it does exactly what I want it to do; no more, no less. Call me crazy, but it sure makes my programming life a LOT easier.

Then again, I'm dyslexic! ;)
 
While a procedural programmer myself (in fact, this week I fine-tuned an MVS assembler routine) who uses RDBMSs as my preferred data store, I still recognise the strength of OO methodologies for certain areas. Some of the projects I've worked on in the past would have been much harder work with an OO approach. OO solves a number of data-driven modelling problems, though it does introduce its own. Like most of these issues, it's a judgement call as to which outweighs the other.
 
From a friend: Not all object-oriented languages are created alike. There's no particular syntactic confusion if you define the language in such a way that a class-name call is a constructor call - but you may as well say that get/set methods in Java are unnecessary ("there's no syntactic confusion"...). The reality is that the new operator is as syntactically required as the braces are, and for exactly the same reason - it's convention. If it bothers you that much, check out lex and yacc. I'm sure it's possible to use them to generate a compiler for a language that would be syntactically exactly like the object-oriented language that is causing you grief, except that the new is no longer required. Or switch to Perl, where the constructor can be called whatever you want it to be called.
 
I used to really like arrays of pointers to functions and things like that in C. C++ just took all the fun out of programming.

Besides the fact that there's nothing stopping you doing that in C++ just as you did in C, you can achieve the same thing using what is known as the delegate pattern. Essentially you have one object that calls another object's function to perform its work. The client only ever sees the first object, but the actual code that is executed will vary according to the actual object used as the delegate. This is where sub-typing comes in useful, because you need only define a virtual/abstract class as the class that declares the delegate function, and allow the subtypes to actually implement it.

Is the "object" idea supposed to simplify that ?

The major OO concepts are inheritance/polymorphism and data encapsulation.

In assembler, when you get an object reference, what it really gives you is a context for method calls. It's like a struct with functions.

OO is actually rather clumsy and prone to generating grossly overbloated code that does sweet FA most of the time, if you ask me.

That's only true if you aren't really using OO in the right way. I rarely write classes that are more than 1000 lines of code - most are 100 or less.

There are some things that are easy to do in OO that are ridiculously complex in a procedural language (which, by what you describe, is what you are used to).
 
In assembler when you get an object reference what it really does is give you a context for method calls. It's like a struct with functions.

When you think in assembly, several things sound strange, like what is a "struct with functions"? I guess you mean a struct with pointers to function entry points. Still, I'd rather see the compiler output; it'd make more sense to me. Why say "inheritance" and not just copy an array of variables to a new memory location and call that "a new object"... I guess I'm hopelessly addicted to asm; all I see is registers and memory locations. But there isn't anything else, is there?

There are some things that are easy to do in OO that are ridiculously complex in a procedural language (which by what you descirbe you are used to).

No argument from me here, I'm just an amateur. But I'd like to see a small example, like the same stuff accomplished procedurally and object-orientedly.
 
That's only true if you aren't really using OO in the right way. I rarely write classes that are more than 1000 lines of code - most are 100 or less.
I have written many hundreds of whole programs in fewer than 100 lines of code. 1000 lines would be considered a huge procedural program, even in COBOL! Many of my procedural programs have compiled to under a hundred kilobytes of executable, with no need to drag around great lumps of library code to work, either. And they did precisely and only the job they were designed to do, VERY quickly. Cleverer programmers could do far more than I could, in far less.

While I can and do understand how OO works and the advantages it offers (I use it myself), to me as an olde-tyme programmer with over 30 years' experience, OO stands mostly for 'Orribly Optimised'! ;)

To be honest, I think young programmers should be taught how to write well-constructed and optimal-design code in a highly limited environment BEFORE they graduate to OO. They should learn to appreciate how big a byte really is, what can actually be done with very limited functionality. They should be encouraged to keep these limitations in mind, because they continue to exist on even the most modern computers...

It's all very fine to say "hardware is cheap", but it still isn't free... And given that much of the life of most modern OSes goes to dealing with bloatware produced by programmers who simply do not understand what their code actually does to memory and hardware (and many of these are current commercial programs!), imagine how much more could be achieved by simply optimising their software with the above performance goals in mind, without changing the hardware... Why, you could get the same features much faster and with much lower memory consumption. Which is better, no?

OK I'll go and oil my Zimmer frame now, get my cardigan on, get a nice cup of tea, and sit by the fire... Thank you for listening, young man.
 
Well said, Zep. Given the overhead of some JVMs etc., teaching kids those kinds of performance issues might reduce code bloat a bit.
And remember, old programmers never die, they're just buried cut-edge left.
 
It's all very fine to say "hardware is cheap", but it still isn't free... And given that much of the life of most modern OSes goes to dealing with bloatware produced by programmers who simply do not understand what their code actually does to memory and hardware (and many of these are current commercial programs!), imagine how much more could be achieved by simply optimising their software with the above performance goals in mind, without changing the hardware... Why, you could get the same features much faster and with much lower memory consumption. Which is better, no?

Actually there is a rule which says that "the rate at which software bloats will always be equal to or greater than the rate at which processor power increases". It's amazing how many programmers manage to devour every single bit of available computing power even when all they have to do is a "Hello World!"
 
