Programming Trends to Follow?

I don't want to say too much about SAP, because I don't have that much experience with it. However, it does always seem to be connected to a ridiculously over-complicated operation, in my limited experience.

Dell tried to implement SAP a while back. After spending millions of dollars and thousands of man-hours, they abandoned the project as unworkable.

This sort of thing is what keeps ERP consultants in business.

My only experience with SAP is that whenever I find myself working with some system and remark, "this is a hunk of crap, what is it written in?", the answer is invariably "SAP".
 
In my experience, things like Ruby are good for putting up quick demos, [...]

This depends greatly on the requirements and the context of the application. The apps on your smart-phone or your chumby or the warehouse-mgr's data tablet will never have more than one user, but there are performance and size limitations that may dictate the language.

I recall one embedded project where the handheld system had a generous 32MB of flash storage to hold kernel, apps and data. Some of the software designers couldn't get used to the idea that, even ignoring performance, we weren't going to put 20MB of perl libraries & binaries on the device.

I like many of the ideas behind OOP, especially the encapsulation and the ability to keep the manipulation together with the data.

There is more than a little to like, and the OO concept is not restricted to classic OOP languages. Kernels like BSD or Linux are very much OO in design, despite the fact that these are all written in C with some assembler. For example, there is an informal 'class' for the MMU interface, where each architecture is an instance written in assembler. The file-systems are a class with numerous instances. Device drivers are a class with two inheriting (sub)classes. In some kernels, processes are a class with a well-defined and variable set of operations. Access control is implemented as a class with several instances. The implementation is, in some cases, fairly close to the way an OO compiler would generate code.

It's better to have an HLL to enforce and check, but it's impractical to use a language like C++ because of the extensive run-time support needed. And efficiency ....

But it all breaks down when you need to process 170 million records[...]

Apps in some OO languages can be just as fast and efficient as in comparable procedural languages, *BUT* it's very difficult for a developer to determine which libraries and methods are efficient and which are not. Also, it's somewhat difficult to estimate the performance of in-line code in most HLLs, and that's critical for the heavily used bits.

'Efficiency', whether in performance or space or power consumption, shouldn't become a fetish, but it is also never ignorable and occasionally critical.

Like most disciplines, programming involves trade-offs. Speed of development vs. robustness of the final product. Ease of encapsulation vs. efficiency of algorithms. And so forth.

Exactly, and it's this complex 'tension' between competing requirements (including project schedule, life-cycle, development time, app performance) that makes design interesting. How that tension is resolved is the 'art'.

If you're just writing web apps for small companies, use whatever they want, or whatever you want (if you have the option).

You should always be able to justify the decision against project goals. It seems a lot of developers are undisciplined and substitute their personal biases for objective comparisons. That can ruin a small company.

Most large companies have standards for coding. Some of them are stupid, some of them are bitterly-learned lessons that come from trying to maintain something over a period of years or decades. Your bright newfangled language may be gone, or YOU may be gone, when it comes time to update the application. No one wants to pay those costs.

A stupid coding standard is usually better than no standard.

Amen ! I've earned a lot of money as a consultant un-doing bad design decisions made by undisciplined egotistical developers. 'Cowboy' developers who have since ridden into the sunset - leaving behind a big pile of horse poop, and little or no documentation. We all have favorite tools - the problem occurs when our tool-bias distorts our view of design problems. If you are overestimating the advantages of your pet-tool and underestimating the limitations, then you can't make good decisions.

So, my earlier point: learn a lot of languages and understand them in comparison. Read (good) criticisms of your favorites. Be a good skeptic - always doubt and question, especially your own preferences.


I don't see any great advantage to O-O that we didn't get from sensibly modularising our sequential code

Really? The ability of an OO compiler to manage classes and inheritance and namespaces and all the strong type checking is no advantage? We do (some of) this in the kernels in C, passing around lists of function pointers and all sorts of pointer indirections - it's massively complex and error-prone, and humans do all the checking. I think OO compilers provide a huge advantage.

- indeed I am often in dispute with my programmers estimates because I know how long something would have taken me to code 'the old way' and yet the new, 'fantastically re-useable' O-O approach gets estimated (and subsequently takes) nearly double or triple the time to write and then is a bugger to maintain because

Somewhat agree. I don't think the difference in development time is as great as you suggest. Properly written OO strongly disposes the developer to think through all the data types and methods before writing code, and that saves time. OTOH the notion that this creates much re-usability seems generally untrue. My experience is that expanding or changing the requirements after the fact usually causes "refactoring", which is often an OO synonym for throwing your code away and starting over. That's no better and no worse than with procedural languages.

A website that is likely to make you think, and perhaps change some opinions,
http://c2.com/cgi/wiki?RefactoringWithCeePlusPlus


That's the big dilemma these days...the more efficient it is, the less maintainable, and vice-versa.

That's not new - it's fundamental. It's rare that a naive approach to any problem is efficient by any measure except 'design' time. There is usually added efficiency (by some well defined metric) to be had at the cost of added complexity + design time.
 
....
Really? The ability of an OO compiler to manage classes and inheritance and namespaces and all the strong type checking is no advantage? We do (some of) this in the kernels in C, passing around lists of function pointers and all sorts of pointer indirections - it's massively complex and error-prone, and humans do all the checking. I think OO compilers provide a huge advantage.

We don't use much C, mostly Java. In my experience, my programmers struggle to keep track of the object library (at least more than I used to struggle to know what module to call to, e.g., calculate dates) because stuff is split down into such small chunks.

...

Somewhat agree. I don't think the difference in development time is as great as you suggest. Properly written OO strongly disposes the developer to think through all the data types and methods before writing code, and that saves time. OTOH the notion that this creates much re-usability seems generally untrue. My experience is that expanding or changing the requirements after the fact usually causes "refactoring", which is often an OO synonym for throwing your code away and starting over. That's no better and no worse than with procedural languages.

Yeah, I'm probably exaggerating a bit, but I'm not talking about estimating the whole project, just one component - possibly even a simple problem fix that, in my 'procedural' head, I can work out a simple 'one-line fix' for. Oh, and yeah, I hear the 'refactoring' word a lot!

...
A website that is likely to make you think, and perhaps change some opinions,
http://c2.com/cgi/wiki?RefactoringWithCeePlusPlus

Thanks for the link. I haven't done any C++ for about 10 years though!;)
 
OOP greatly reduces cyclomatic complexity. I never use switch statements anymore, and rarely use if statements.
 
SAP was going to use Java as the future direction for all its application development. They have quietly dropped it, and are moving applications from Java back to their own in-house programming language, ABAP. ABAP is a derivative of COBOL. Not as theoretically correct as Java, but a hell of a lot faster and less memory-intensive. I saw a Java part of it use 48GB of memory the other day to process one single transaction of data. You can't do that in a commercial environment, even today.

I'd never heard of ABAP before.

It's my policy never to make snap judgements based on superficial impressions, but...I refuse to use a programming language that sounds like the first thing I'd say after a debilitating stroke.
 
My experience is that expanding or changing the requirements after the fact usually causes "refactoring" which is often an OO synonym for throwing your code away and starting over.

The goal of refactoring is to improve the design of code without altering its behavior. By definition that means tossing out the bad stuff and replacing it with something better. Occasionally it may mean starting over on certain components, but if this happens frequently you've got problems.
 
The goal of refactoring is to improve the design of code without altering its behavior. By definition that means tossing out the bad stuff and replacing it with something better. Occasionally it may mean starting over on certain components, but if this happens frequently you've got problems.

Why would constant refactoring mean that you've got problems? Every developer knows that requirements change and new features are always added. With every change there is always technical debt that's introduced. Eventually you need to repay that debt by refactoring sections of the system. I would argue that if you're not refactoring enough, your system is not changing enough, and your company is not innovating enough.
 
Or pick the one that's paying you now. For me, that's C++, MFC, and Java.
 
I'd never heard of ABAP before.

It's my policy never to make snap judgements based on superficial impressions, but...I refuse to use a programming language that sounds like the first thing I'd say after a debilitating stroke.

There is plenty of work and it pays well.

http://www.seek.com.au/JobSearch?DateRange=31&Keywords=abap&nation=3000&SearchFrom=quick

It is not at all theoretically sound, but it is designed to interface easily to relational databases, and has even had an OO component hacked on.
 
aggle-rithm said:
OOP greatly reduces cyclomatic complexity. I never use switch statements anymore, and rarely use if statements.
And yet the complexity of those conditions must be realized somewhere. Is it more perspicuous to have them buried in the underlying structure of the language?

~~ Paul
 
When I first started working as a developer, Rapid Application Development (RAD) was the biggest thing since sliced bread; today, it is a stupid idea that we wasted too much time on ten years ago.

I've made a very lucrative career out of Rapid Application Development. It's an extremely beneficial strategy when you have clients who know they need some kind of software solution, but are unable to articulate exactly what it's supposed to do. Clients like these want someone to show them what they need, and this is most easily accomplished when you can provide them with functioning prototypes quickly and with a minimum of upfront input.

I'm not going to pretend I have a clue as to what software development will look like in ten years, but I would be very surprised if there were no room for the kind of quick turnaround services I provide.
 
And yet the complexity of those conditions must be realized somewhere. Is it more perspicuous to have them buried in the underlying structure of the language?

~~ Paul

The goal of reducing cyclomatic complexity is to make the source code easier to understand...perhaps at the expense of performance.

It's like the old fish-or-cut-bait dilemma. The faster and more efficient code is, the more likely it will be difficult to maintain. The easier it is to maintain, the less efficient it will be. (In general. I know there are exceptions.)

Performance usually isn't an issue with the software I work with, since most of the execution time is spent waiting for database access, not in crunching data.
 
aggle-rithm said:
The goal of reducing cyclomatic complexity is to make the source code easier to understand...perhaps at the expense of performance.
Right, but if the complexity is simply buried in the semantics of the language, rather than programmed explicitly, have we gained anything?

I don't know, just askin'.

~~ Paul
 
Right, but if the complexity is simply buried in the semantics of the language, rather than programmed explicitly, have we gained anything?

I don't know, just askin'.

~~ Paul

I see what you're saying...

Yes, because the execution paths you're not interested in are hidden inside another class, rather than cluttering up the page.
 
I figure 85% of all business apps are in COBOL. You get money out of an ATM, you're probably interfacing with a CICS backend and dealing with COBOL. Your monthly statements and the batch cycles behind them - COBOL. Between 30,000 and 40,000 mainframes worldwide, according to what I've been able to find, and most of those are being used by businesses.
 
COBOL programmers are dying off and retiring, and there are very few university programs that teach it. There's still a market, but it's recovering slowly. Meanwhile, I entertain myself by learning Java.
 
