In my experience, things like Ruby are good for putting up quick demos, [...]
This depends greatly on the requirements and the context of the application. The apps on your smart-phone or your Chumby or the warehouse manager's data tablet will never have more than one user, but then there are performance and size limitations that may dictate the language.
I recall one embedded project where the handheld system had a generous 32MB of flash storage to hold the kernel, apps and data. Some of the software designers couldn't get used to the idea that, even ignoring performance, we weren't going to put 20MB of Perl libraries and binaries on the device.
I like many of the ideas behind OOP, especially the encapsulation and the ability to keep the manipulation together with the data.
There is more than a little to like, and the OO concept is not restricted to classic OOP languages. Kernels like BSD or Linux are very much OO in design despite the fact that they are written in C with some assembler. For example, there is an informal 'class' for the MMU interface, where each architecture is an instance written in assembler. The file systems are a class with numerous instances. Device drivers are a class with two inheriting (sub)classes. In some kernels, processes are a class with a well-defined and variable set of operations. Access control is implemented as a class with several instances. The implementation is, in some cases, fairly close to the way an OO compiler would generate code.
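A minimal sketch of the pattern, in the spirit of (but not copied from) a kernel's VFS layer - all the names here are made up. The struct of function pointers is the 'class', and each file system supplies an instance:

    #include <stdio.h>

    /* The 'class': the operations every file system must provide. */
    struct fs_ops {
        const char *name;
        int (*mount)(const char *device);
        int (*read_block)(unsigned long block, void *buf);
    };

    /* One 'instance' of the class -- a fictional file system. */
    static int myfs_mount(const char *device)
    {
        printf("myfs: mounting %s\n", device);
        return 0;
    }

    static int myfs_read_block(unsigned long block, void *buf)
    {
        (void)buf;
        printf("myfs: reading block %lu\n", block);
        return 0;
    }

    static const struct fs_ops myfs_ops = {
        .name       = "myfs",
        .mount      = myfs_mount,
        .read_block = myfs_read_block,
    };

    /* Generic code dispatches through the table -- a hand-rolled virtual call. */
    static int mount_any(const struct fs_ops *ops, const char *device)
    {
        return ops->mount(device);
    }

    int main(void)
    {
        char buf[512];
        mount_any(&myfs_ops, "/dev/sda1");
        myfs_ops.read_block(0, buf);
        return 0;
    }

Register a list of such instances and you have something very close to a class with subclasses - all of it maintained by hand.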
It's better to have an HLL to enforce and check all of that, but in a kernel it's impractical to use a language like C++ because of the extensive run-time support needed. And then there's efficiency ....
But it all breaks down when you need to process 170 million records[...]
Apps in some OO languages can be just as fast and efficient as in comparable procedural languages, *BUT* it's very difficult for a developer to determine which libraries and methods are efficient and which are not. It's also somewhat difficult to estimate the performance of in-line code in most HLLs, and that's critical for the heavily used bits.
'Efficiency', whether in performance or space or power consumption, shouldn't become a fetish, but it is also never ignorable and occasionally critical.
Like most disciplines, programming involves trade-offs. Speed of development vs. robustness of the final product. Ease of encapsulation vs. efficiency of algorithms. And so forth.
Exactly, and it's this complex 'tension' between competing requirements (including project schedule, life-cycle, development time, app performance) that makes design interesting. How that tension is resolved is the 'art'.
If you're just writing web apps for small companies, use whatever they want, or whatever you want (if you have the option).
You should always be able to justify the decision against project goals. It seems a lot of developers are undisciplined and substitute their personal biases for objective comparisons. That can ruin a small company.
Most large companies have standards for coding. Some of them are stupid, some of them are bitterly-learned lessons that come from trying to maintain something over a period of years or decades. Your bright newfangled language may be gone, or YOU may be gone, when it comes time to update the application. No one wants to pay those costs.
A stupid coding standard is usually better than no standard.
Amen! I've earned a lot of money as a consultant undoing bad design decisions made by undisciplined, egotistical developers. 'Cowboy' developers who have since ridden into the sunset, leaving behind a big pile of horse poop and little or no documentation. We all have favorite tools; the problem occurs when our tool bias distorts our view of design problems. If you overestimate the advantages of your pet tool and underestimate its limitations, you can't make good decisions.
So, my earlier point: learn a lot of languages and understand them in comparison. Read (good) criticisms of your favorites. Be a good skeptic; always doubt and question, especially your own preferences.
I don't see any great advantage to O-O that we didn't get from sensibly modularising our sequential code
Really? The ability of an OO compiler to manage classes, inheritance, namespaces and all the strong type checking is no advantage? We do (some of) this in the kernels in C, passing around lists of function pointers and all sorts of pointer indirections; it's massively complex and error prone, and humans do all the checking. I think OO compilers provide a huge advantage.
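A contrived sketch (again with invented names) of what 'humans do all the checking' means in practice. In the hand-rolled C version, one cast is enough to slip a wrongly-typed function into the dispatch table, and the mistake only shows up at run time; an OO compiler would reject the equivalent override at compile time:

    /* Hand-rolled dispatch table, as above. */
    struct net_ops {
        int (*send)(const void *buf, unsigned long len);
    };

    /* Wrong signature entirely: takes a file descriptor, not (buf, len). */
    static int bogus_send(int fd)
    {
        return fd;
    }

    static const struct net_ops broken = {
        /* The cast silences the compiler; calling broken.send() is now
         * undefined behaviour that no tool caught for us. */
        .send = (int (*)(const void *, unsigned long))bogus_send,
    };

    int main(void)
    {
        /* It compiles cleanly; nothing complains until it crashes in the field. */
        return broken.send != 0 ? 0 : 1;
    }
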
- indeed I am often in dispute with my programmers' estimates because I know how long something would have taken me to code 'the old way' and yet the new, 'fantastically re-usable' O-O approach gets estimated (and subsequently takes) nearly double or triple the time to write and then is a bugger to maintain because
Somewhat agree. I don't think the difference in development time is as great as you suggest. Properly written OO strongly disposes the developer to think through all the data types and methods before writing code, and that saves time. OTOH the notion that this creates much re-usability seems generally untrue. My experience is that expanding or changing the requirements after the fact usually causes "refactoring", which is often an OO synonym for throwing your code away and starting over. That's no better and no worse than with procedural languages.
A website that is likely to make you think, and perhaps change some opinions:
http://c2.com/cgi/wiki?RefactoringWithCeePlusPlus
That's the big dilemma these days...the more efficient it is, the less maintainable, and vice-versa.
That's not new - it's fundamental. It's rare that a naive approach to any problem is efficient by any measure except 'design' time. There is usually added efficiency (by some well-defined metric) to be had at the cost of added complexity and design time.
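A toy illustration of that trade-off (a hypothetical example, not from the discussion above): counting duplicate values. The naive version is the one anybody would write first and anybody can maintain; the faster version buys O(n log n) behaviour at the price of an allocation, a comparator, and more code to get wrong:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Naive: obvious, easy to read -- O(n^2) comparisons. */
    static size_t count_dups_naive(const int *a, size_t n)
    {
        size_t dups = 0;
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (a[i] == a[j]) { dups++; break; }
        return dups;
    }

    static int cmp_int(const void *p, const void *q)
    {
        int x = *(const int *)p, y = *(const int *)q;
        return (x > y) - (x < y);
    }

    /* Faster: sort a copy, then scan adjacent pairs -- O(n log n),
     * but more code, a heap allocation, and a comparator to get wrong. */
    static size_t count_dups_sorted(const int *a, size_t n)
    {
        size_t dups = 0;
        int *copy = malloc(n * sizeof *copy);
        if (!copy) return 0;
        memcpy(copy, a, n * sizeof *copy);
        qsort(copy, n, sizeof *copy, cmp_int);
        for (size_t i = 1; i < n; i++)
            if (copy[i] == copy[i - 1]) dups++;
        free(copy);
        return dups;
    }

    int main(void)
    {
        int a[] = { 3, 1, 3, 7, 1, 3 };
        size_t n = sizeof a / sizeof a[0];
        printf("naive: %zu  sorted: %zu\n",
               count_dups_naive(a, n), count_dups_sorted(a, n));
        return 0;
    }

For 170 million records the second version wins easily; for a few hundred it's just extra surface area for bugs.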