
C/C++ vs. Java vs. C#

Hello dasmiller,



May I ask you what compiler you used? Because I never ever had that problem. What matters in the declaration is that the type(s) of the argument(s) match the implementation. The names are usually completely irrelevant. I know that _some_ compilers can generate, if instructed to do so, empty implementations of a function if there is no implementation given in the source, but only the declaration.
I think what he was saying was along the lines of:

Code:
int secondsfromminutes(int minutes, int seconds);

int secondsfromminutes(int seconds, int minutes)
{
   return (minutes*60) + seconds ;
}
The declaration and implementation assume a different order of variables.

It's an unfortunate side effect of C compatibility. There are plenty more 'gotchas' in there. OTOH, I can't really recall getting caught by many of them. Typical industrial-strength programming practices (asserts, unit testing, stepping through all new code you write, etc.) pretty much catch that stuff. But yes, it's a weakness of the language.
 
Hello Z,



in plain C that would be

A = (rand() % 100) + 1;

to get a value between 1 and 100 (both inclusive). I think your example has a bug: it would give you numbers from 1 to 101 (101 in the case that RND(1) returns 1).

In BASIC - at least the versions I used - the INT(RND) function returned a number from 0.00 to 0.99. Multiplying by 100 gives 0 to 99; hence, you had to add the +1 at the end.

However, in C you also need to seed the random number generator using something like:

srand(time(NULL));

Instead of time(NULL) you can use a fixed value, if you need reproducible streams of random numbers on each startup.

Why did we have to start seeding random number generators, anyway? I don't recall, but I thought at least the Commodore took a random seed from the internal clock... I've never quite understood why we have more complex methods now than we did back in the '70s. Like declaring variables in the first place - oh, I can see it's useful when you specifically want certain variable types, but back in the day we just went ahead and used whatever variables we wanted without declaring them.

And we walked uphill both ways, through ten miles of snow, sleet, and blistering desert heat to get to the computer store... :D

Edit: in case you wonder, % is the modulo operator. It gives you the remainder of a division. So, 136 % 100 = 36, 593 % 100 = 93, etc...
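
Putting those pieces together, a minimal sketch (typed ad hoc, so treat it as illustration): seed once at startup, then take (rand() % 100) + 1 whenever you need a value from 1 to 100. It should compile as either C or C++.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int i;

    srand((unsigned)time(NULL));        /* seed once, with the current time */

    for (i = 0; i < 5; i++) {
        int a = (rand() % 100) + 1;     /* 1 to 100, both inclusive */
        printf("%d\n", a);
    }
    return 0;
}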

Fascinating.
 
OTOH, it's easy to make applications in just a few minutes. One thing I can't figure out - what are the usual methods for randomization? I remember in BASIC we had a way to do it that looked something like:

A = INT(RND(1)*100)+1

that would produce a random integer from 1 to 100 - how on earth do we do that now?

Well... back to lurk for me.

In C#
Code:
private int RandomNumber(int min, int max) {
   Random random = new Random();
   return random.Next(min, max); 
}
With that function declared, you would get a random number in a desired range via:

aParticularRandomNumber = RandomNumber(1, 100)
 
Well, I mostly adore C++, for the reasons others criticize it. Bjarne's book on how and why he designed C++ the way he did is very instructive for those who disparage the language...
I think Roger nails it. I programmed C++ for many years and used both Stroustrup's book and annotated reference extensively. Then I took a close look at Java and C#, and that's when I realized what a fantastic job Stroustrup did designing the language. Some of the things in C++ look weird or obscure when you first encounter them, but they are the product of years and years of careful thought, and their virtues are glaringly obvious when you look at another language.

Example: why doesn't C++ have garbage collection? For a couple of very good reasons. Firstly, you don't need it! If you adhere to the creation-is-resource-acquisition and destruction-is-resource-release paradigm, and you use stack-based constructors for resource management, you just can't go wrong. Secondly, with garbage collection you can't control exactly when a resource gets released. Thirdly, garbage collection is just very, very inefficient. Fourthly, garbage collection is complicated by mutual references. And so on. Java and C# designers just didn't think through all the implications, and consequently it is difficult to write inherently efficient and well managed code in either of those languages.
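
For instance, a quick sketch of the resource-acquisition-is-initialization idea (typed ad hoc, with a made-up FileHandle class, so treat it as illustration rather than production code):

Code:
#include <cstdio>
#include <stdexcept>

class FileHandle
{
public:
    explicit FileHandle (const char* path)
        : file_(std::fopen(path, "rb"))
    {
        if (!file_)
            throw std::runtime_error("could not open file");
    }

    ~FileHandle () { std::fclose(file_); }   // released when the object goes out of scope

    std::FILE* get () const { return file_; }

private:
    std::FILE* file_;

    FileHandle (const FileHandle&);              // non-copyable
    FileHandle& operator= (const FileHandle&);
};

void ReadSomething (const char* path)
{
    FileHandle f(path);     // resource acquired in the constructor
    // ... use f.get() ...
}                           // fclose() runs here, even if an exception is thrown

Because the destructor runs deterministically when the stack unwinds, the file is closed at a known point - no finalizer and no garbage collector involved.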

C++ is nearly self-encrypting and encourages a lot of (IMO) bad practices. ETA: Gonna guess that Roger (post #5) has a different perspective on that.
My experience was that bad C++ code was the product of bad programming, not bad language design. C++ is admittedly a very difficult language to master, but well worth the effort. One of the most powerful features of the language is the STL (standard template library), and if you use and extend its patterns you will never see a pointer or even see 'new' and 'delete'. With correct factorization of responsibilities it is possible to write very hierarchical code in which complexity at any given level is close to zero.
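
For instance (another quick ad hoc sketch), a few lines of STL use with no pointer, no new, and no delete anywhere in sight:

Code:
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main ()
{
    std::vector<std::string> names;
    names.push_back("Stroustrup");
    names.push_back("Hejlsberg");
    names.push_back("Torvalds");

    std::sort(names.begin(), names.end());   // the container manages its own memory

    for (std::vector<std::string>::const_iterator it = names.begin();
         it != names.end(); ++it)
        std::cout << *it << '\n';

    return 0;
}   // the vector releases its storage here, automatically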

Of the four languages you mention, I only like C. C++ (and likewise Objective-C) has two orthogonal feature sets layered on top of one another, procedural and OO, which haven't been integrated together very well. C++ has the misfortune of neither being the simple, clear system-level programming language that C is, nor possessing any of the advanced abstractions high-level languages provide.
I couldn't disagree more. As pointed out by Roger, the ability to span from low-level to very abstract is C++'s great strength. For example, C# is wonderful if you are creating a GUI. But try to manage data efficiently and you are screwed. .NET did introduce generics in version 2.0 (a belated recognition of a serious omission), but by comparison to the STL its containers suck.

Regarding Torvalds' dislike of C++, I think he just never read Stroustrup's annotated reference early enough. A lot of the C code written for Linux reads like C++ with a 'this' pointer explicitly coded by hand, and the absence of stack-based destructors makes exception handling a huge pain. It's a great pity C++ was not adopted whole-heartedly, but at the time he was developing Linux C++ was still teething so it was perhaps a justifiable call.

So in answer to the OP, if you are creating a serious application use C++, but make sure all of your programmers are well trained and have read Stroustrup's annotated reference. If you are slapping together a lightweight GUI app with low maintenance requirements I suppose C# is ok, but only because it is so easy to interface to .NET and COM. Java is good for keeping you awake. But at the end of the day, it's the quality of the programmer that's most important.
 
I think what he was saying was along the lines of:

Code:
int secondsfromminutes(int minutes, int seconds);

int secondsfromminutes(int seconds, int minutes)
{
   return (minutes*60) + seconds ;
}

The declaration and implementation assume a different order of variables.

And what he is saying is that there's no reason to code it as above. Instead you should code:

Code:
int secondsfromminutes(int , int);

int secondsfromminutes(int seconds, int minutes)
{
   return (minutes*60) + seconds ;
}
 
In C#..

private int RandomNumber(int min, int max) {
Random random = new Random();
return random.Next(min, max);
}

With that function declared, you would get a random number in a desired range via:

aParticularRandomNumber = RandomNumber(1, 100)

... wow.

The technology has advanced, and computers can do more than ever before... but to get a random number, we have to type all that rather than the old way... Shouldn't it get easier, rather than harder?

Of course, I'm still having trouble wrapping my head around using variable names that are actually names (makes debugging a HELL of a lot easier though) rather than just single or double characters....
 
My experience was that bad C++ code was the product of bad programming, not bad language design.

That's been my experience. Bad programmers do bad things wherever the language allows. Good programmers don't.

I don't mean to be flippant or shrug off any responsibility the language design bears towards ensuring good code; however, I do feel most of the blame lies with the programmer when things go bad.
 
When you're doing bit-fiddling with binary files, it's got to be C++. Using a language that hides pointers in that situation is like threading a needle wearing boxing gloves.
 
Okay, I'm probably wrong on that. I wrote an OpenGL program in straight C++, and then my boss wanted to see if we could switch to a commercial product that used Java and all kinds of third-party libraries, all on top of OpenGL. As you might imagine, they had huge performance issues - mine would pop up and start running in a second or two, theirs would take one to two minutes. And they were hamstrung, having no way to tune all the components they were using. But that is more an issue of using third-party libraries, not the language. I stand corrected.
You're not really wrong. There are performance issues, not necessarily to the extent you're describing, but it can still be as much as 20%.

In fact, Microsoft discontinued the .NET classes for Direct3D years ago because the people who care about direct access to D3D tend to also care about performance and can't live with the .NET limitations, while the rest didn't really need it anyway and can just settle for the more abstract interfaces like WPF or XNA.

The technology has advanced, and computers can do more than ever before... but to get a random number, we have to type all that rather than the old way... Shouldn't it get easier, rather than harder?
Well, the auto-completion features of Visual Studio are excellent (and context-sensitive), so you don't actually have to type that much.

When you're doing bit-fiddling with binary files, it's got to be C++. Using a language that hides pointers in that situation is like threading a needle wearing boxing gloves.
If you want to do that kind of stuff though, you still have the option of separating that component out into native code and simply calling it from your .NET/Java application. That's done all the time.
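
As a rough sketch of that approach (all the names here are made up): the bit-level work lives in a small native C++ DLL, and the managed side just declares it and calls it.

Code:
// C++ side, built as a DLL (say, bitutils.dll)
extern "C" __declspec(dllexport)
int CountSetBits (const unsigned char* data, int length)
{
    int count = 0;
    for (int i = 0; i < length; ++i)
        for (unsigned char b = data[i]; b != 0; b >>= 1)
            count += b & 1;       // classic bit-fiddling, pointers and all
    return count;
}

// C# side (shown as a comment to keep this block in one language):
//   [DllImport("bitutils.dll", CallingConvention = CallingConvention.Cdecl)]
//   static extern int CountSetBits(byte[] data, int length);
//   ...
//   int bits = CountSetBits(buffer, buffer.Length);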
 
Hello dasmiller,

May I ask you what compiler you used? Because I never ever had that problem. What matters in the declaration is that the type(s) of the argument(s) match the implementation. The names are usually completely irrelevant. I know that _some_ compilers can generate, if instructed to do so, empty implementations of a function if there is no implementation given in the source, but only the declaration.

If you really had that bug because of the names alone, I would tend to say that this is a severe bug in the compiler instead.

Oh, I think the compiler was working just fine. The types were the same, but by allowing different param names in the header vs. the body, the function looked different to the outside world.

It was many years ago - Visual Studio 1.5 or so. The problem, as I recall, was something like:

buggyfunction.h
double ArcTan2(double x, double y);

buggyfunction.cpp
double ArcTan2(double y, double x)
{
// a bunch of code here
}

So the bug was that the routines that called ArcTan2 weren't getting the answer that they expected.

Yes, my bad for switching the names - but if I'm trying to write code that's easy to maintain, why would I ever want the declaration names to be different from the implementation names? I'm intending that as a rhetorical question, but there may be a real answer.

Anyway, since then, I really try to work the parameter order into the function name if I have multiple params with the same type. In a case like that, I'd call it "ArcTan2YX" or some such, although I'm guessing that the standard C++ libraries have a 2-argument arctangent function and of course I'd use that rather than rolling my own.
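
(As it happens there is - atan2, declared in <cmath> - and its argument order is y first, then x, which is exactly the sort of detail this whole mix-up is about. A quick sketch:)

Code:
#include <cmath>
#include <cstdio>

int main ()
{
    double x = 1.0, y = 1.0;

    // The standard two-argument arctangent takes (y, x) - in that order.
    std::printf("%f\n", std::atan2(y, x));   // prints 0.785398 (pi/4)
    return 0;
}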

On a sidenote, you don't need to give any variable names in the declaration at all. Something like "int myfunc(int, int);" should be enough.

When there's only one argument, and it's obvious what the argument should represent (say, Sqrt, for example) then the name isn't terribly important. But if it's possible to guess wrong about what the arguments are, or what order they go in, then there should be names there to help the poor sot who's trying to maintain the code 5 years later. Sure, that's what comments are for, but in my (limited) experience, the guys (it's always the guys . . . ) who take a lot of hard-to-maintain shortcuts are also the ones with really thin comments.

And then they get grumpy when you complain about the lack of comments, and they'll give you helpful comments like:

void InitializeThing(double, void*, double, int, void*, int, double); // initializes a Thing.

Okay, I'm digressing a bit.

YMMV. A lot of people with far more coding experience than I have weighed in on both sides of these things.
 
...
I happen to use C# the most (when I am not stuck with VB). But, perhaps that is mostly due to historical accident. I have always been a Microsoft-platform developer in my professional career, and I suppose there is little chance of that changing any time soon.

What do you think?

I hope it stays that way. I like C#, though C and C++ are important for some high-performance applications. You can use pointers in C#, however.
 
When you're doing bit-fiddling with binary files, it's got to be C++. Using a language that hides pointers in that situation is like threading a needle wearing boxing gloves.

That reminds me of a project I did as an assignment in University. I decided to make a compression algorithm of my own.

My goal was to make it compress well, with no concern for the time it takes to compress.

So, I scanned the file tabulating the frequency of all single byte characters.

The compressed file would then be built as follows:

1) Header: 3 byte string "KEN", since I called it kenpression, and my name is Ken.

2) Table: 256 bytes, with the values 0-255 each occurring once, ordered from the highest tabulated frequency to the lowest frequency in the file to compress.

3) Compressed data, with each piece being a variable-length piece of data. The first 3 bits define the length, and the subsequent bits are the data.

Code:
Data Length    Number of Codes    Codes
0              1                  000
1              2                  0010 and 0011
2              4                  01000, 01001, 01010 and 01011
.
.

When I tested it, it seemed to compress most everything down to about 60% of its original size. This was a short 3-day assignment, and I didn't do any testing on pathological data, or include any pre-compression with an RLE algorithm or anything like that.

My point being, I can't imagine how you would pack those variable-bit-length pieces of data, and extract them back out of 8-bit bytes later, without a language like C/C++ to work with.
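
For what it's worth, here is a rough sketch - not the original kenpression code, just an ad hoc illustration - of how those variable-length codes can be packed into whole bytes with shifts and masks:

Code:
#include <vector>

class BitWriter
{
public:
    BitWriter () : current_(0), filled_(0) {}

    // Append the lowest 'count' bits of 'bits', most significant bit first.
    void Write (unsigned int bits, int count)
    {
        for (int i = count - 1; i >= 0; --i) {
            current_ = static_cast<unsigned char>((current_ << 1) | ((bits >> i) & 1u));
            if (++filled_ == 8) {               // a full byte accumulated
                bytes_.push_back(current_);
                current_ = 0;
                filled_  = 0;
            }
        }
    }

    // Pad the final partial byte with zero bits and return the buffer.
    std::vector<unsigned char> Finish ()
    {
        if (filled_ > 0)
            bytes_.push_back(static_cast<unsigned char>(current_ << (8 - filled_)));
        return bytes_;
    }

private:
    std::vector<unsigned char> bytes_;
    unsigned char current_;   // bits accumulated so far
    int filled_;              // how many bits of current_ are in use
};

// Usage, packing the three example codes from the table above:
//   BitWriter w;
//   w.Write(0, 3);    // 000    (data length 0)
//   w.Write(3, 4);    // 0011   (data length 1, data bit 1)
//   w.Write(10, 5);   // 01010  (data length 2, data bits 10)
//   std::vector<unsigned char> packed = w.Finish();

Reading it back is the mirror image: pull bits off the front of each byte, use the first 3 to decide how many data bits follow.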
 
And what he is saying is that there's no reason to code it as above. Instead you should code:

Code:
int secondsfromminutes(int , int);

int secondsfromminutes(int seconds, int minutes)
{
   return (minutes*60) + seconds ;
}
That changes nothing. If the caller thinks the order is minutes/seconds, and the implementer thinks the order is seconds/minutes, you still have the same problem.

I also very, very strongly disagree with the 'should' in your post. I think that is a terrible practice, for several reasons, with which I will now bore you :)

Declarations are often all you get to see with commercial code where source code is not provided. Declarations should be extremely readable. To me, readable means sufficient comments, but no more than necessary.

How many times have you bought some commercial library, opened up the .h file, and seen:

int function1 (int, float, int);
int function2 (int, int, int);
int function3(int, ....)

(assume 'function' is replaced with a meaningful name)

How on earth do you use these functions? Who can tell? No comments, no variable names, etc. On the other hand:

Seconds TimeSinceMidnight (int hour, int minutes, int seconds)

is much more readable, and doesn't require much in the way of comments or printed documentation.

Of course I'd suggest using classes rather than built-in types. Built-in types are likely to cause problems - it's not robust programming by any means. In that case we'd have

Seconds TimeSinceMidnight (HMS thetime);

where HMS is an hours/minutes/seconds class. This example would perhaps be more obvious with dates, where Europeans and Americans order month and day differently.

I've seen so much code where programmers just write a minimal declaration, then put a very descriptive, paragraph-long comment in front of the implementation. This is horrible practice, IMO. First, we should be striving for reusable code. I don't mean code that you can plug into another application, I mean realizing that code will be used, and maintained, for decades. Each time you force a programmer to open a .cpp file to figure out how something works, you are increasing complexity. It takes longer to find the function and readability suffers.

A robust class often has several related methods. Take the HMS class I alluded to earlier. You can imagine it probably has several functions to extract various values from it (hours, minutes), several other functions to assign values to it, and then more functions to manipulate it (add, subtract, etc.). Page, page, page through the cpp file to figure that out, or just look at a concise header file and you'll understand 90% of the class and how to use it. For example (I'm typing this ad hoc, there will be mistakes):
Code:
// ~10 lines of instructions for class usage goes here
class HMS
{
private:
    HMS ();
public:
   // assignment functions
   friend HMS AssignHMS (int hours, int minutes, int seconds);
   friend HMS Seconds (int seconds);
   // ... and a few more


   // arithmetic operators
   HMS operator+ (const HMS& time);
   // ... etc.
};
You can look at that, and pretty much figure out how to use it, all without opening a cpp file. Studies show that if you have to look at more than one screenful of code to understand something, comprehension falls rapidly, and errors multiply.

This is why I hate languages like Java - declaration and implementation all mixed together in one file. Page, page, page, just to try to discover what the class does. Not to mention this style hamstrings serious efforts at making compiling faster. When you have a million lines, compilation time matters.

Plus, the cpp files are where the programming goes on. You should only be looking at those to fix a bug. If you have to read the cpp file just to figure out what a function does, the programmer failed, seriously failed, at commenting or documenting the code. Your refusal to write 5 minutes of comments just cost each programmer 10 minutes (minimum) digging through your code. Ever tried to figure out a modestly sized program (say, 10000 lines) with bad comments, where you have to read code to figure things out? That's several days of work, easy. So, comments and variable naming in headers to explain what is done, and then comments in the cpp file to explain how and why. If you sell your code, and retain your IP, you pretty much have to do it this way anyway; why not remain consistent when the result is so readable and useful anyway?

Anyway, the way C++ does things is well thought out, but it does require discipline and knowledge. Stroustrup expected code to be written as I did it above. Fail to do that and you'll run into the issue complained about. An unfortunate side effect of the C legacy. IIRC, Ada forced the names to be the same - no bugs that way.

Incidentally, I really enjoyed how Ada had you call functions - it made code more readable. I can't remember my Ada anymore, but in C it would look like:
Code:
HANDLE CreateButton (int color, int width, int height);

HANDLE h = CreateButton  (color => red, width=>50, height=>20);
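
C++ has no named arguments, but the old 'named parameter idiom' - chained setters on a small argument object - gets you some of the same readability. A quick ad hoc sketch, with made-up names:

Code:
class ButtonSpec
{
public:
    ButtonSpec () : color_(0), width_(0), height_(0) {}

    ButtonSpec& Color  (int c) { color_  = c; return *this; }
    ButtonSpec& Width  (int w) { width_  = w; return *this; }
    ButtonSpec& Height (int h) { height_ = h; return *this; }

    int color ()  const { return color_; }
    int width ()  const { return width_; }
    int height () const { return height_; }

private:
    int color_, width_, height_;
};

// HANDLE h = CreateButton(ButtonSpec().Color(red).Width(50).Height(20));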

edit: my perception comes from writing industrial-strength code where lives are at stake. First cancer research, then aircraft flight computers, then flight planners and 3D black-box playback software, and now autonomous weapons systems. You don't make mistakes in this kind of code, and you write defensively and robustly, fully expecting the code to be around for decades. I still use, every day, code I wrote 15 years ago, in completely different applications (things like great-circle math, speed/distance/time calculations, that kind of jazz). Once you get used to programming that way, you program everything that way, even throwaway GUI applications. With some obvious exceptions, what works for difficult problems works for easy ones, for the same reasons. I'll admit I'll throw the occasional int around in some quick-and-dirty stuff I write, but a lot of that is forced anyway when you work with the Windows API, where WORD and the like are the order of the day. Anything not GUI tends to eschew built-in types.
 
In C#
Code:
private int RandomNumber(int min, int max) {
   Random random = new Random();
   return random.Next(min, max); 
}
With that function declared, you would get a random number in a desired range via:

aParticularRandomNumber = RandomNumber(1, 100)

Just one correction needed (which is something that always bugged me): The first parameter of the Next function is inclusive, and the second one is exclusive. That means your example will only give you a random number from 1 to 99.

You can correct for that, by pre-incrementing the second parameter:

Code:
private int RandomNumber(int min, int max) {
   Random random = new Random();
   return random.Next(min, ++max); 
}

(Note: the ++ in front of the max.)

You can also enhance the randomness by initializing the random object with clock ticks, like this:

Random random = new Random((int)DateTime.Now.Ticks & int.MaxValue);
 
Today, I am inclined to think that ASP.NET 1.1 was rather ghastly. Have you seen the newer versions? They are substantially less ghastly, since the advent of partial classes, Master pages, generics, improved @page properties, and other things.

Yes, I'm talking 2.0 and/or 3.5 here. It's not that there's anything glaringly and obviously wrong with the overall design, it's just that as soon as I get into the details I always get caught out by some random and unexplainable misfeature.

For instance, try adding optgroup tags to an asp:DropDownList and tell me how you get on. Or adding classes to radio buttons and/or checkboxes without having them wrapped in a span you never asked for or wanted, with the class stuck to the span. Or having an asp:RadioButtonList not generate a table element that's going to make any screen reader choke. Or making an asp:LinkButton work without JavaScript. Or, for complete hilarity, put asp:RadioButtons inside a Repeater and try to make them all part of the same group.

Unfortunately, web accessibility is my bread and butter, and as a framework, the sheer number of things which simply don't work makes ASP.NET so completely unreliable that it's a struggle every day just to make websites comply with the Disability Discrimination Act to the client's satisfaction. It has neither the power and control-freakery that I get from using naked Perl or PHP, nor the convenience of a modern web app framework such as Ruby on Rails.
 
Example: why doesn't C++ have garbage collection? For a couple of very good reasons. Firstly, you don't need it! If you adhere to the creation-is-resource-acquisition and destruction-is-resource-release paradigm, and you use stack-based constructors for resource management, you just can't go wrong. Secondly, with garbage collection you can't control exactly when a resource gets released. Thirdly, garbage collection is just very, very inefficient. Fourthly, garbage collection is complicated by mutual references. And so on. Java and C# designers just didn't think through all the implications, and consequently it is difficult to write inherently efficient and well managed code in either of those languages.

Garbage collection is a trade-off. If your app is constantly creating & destroying thousands of little objects, then C#'s garbage collection could certainly cause problems. If you've got a more stable population of objects, then garbage collection can help you avoid some messy bugs. True, if you'd coded it up properly, the bugs wouldn't be there in the first place, but the garbage collector is less likely to make those sorts of mistakes than I am.

So I'd argue that garbage collection doesn't make it more difficult to have well-managed code (mutual references can be a real headache with non-garbage-collected languages, too). But I certainly agree on the efficiency part.

As for whether the C# and Java designers didn't think through the implications of garbage collection, C# was architected by Anders Hejlsberg, and he was very familiar with the Java practice when he started on C#.

My experience was that bad C++ code was the product of bad programming, not bad language design.

Well, certainly, bad code is a result of bad programming, and no language is so good that it will keep a bad programmer from making bad code. But there's something to be said for a language that nudges a bad programmer to do things less badly, or tolerates some poor code. And even good programmers have bad days.

But at the end of the day, it's the quality of the programmer that's most important.

Absolutely. A great programmer with a mediocre language & environment is much more productive than an average programmer with a great language & environment.

And enormously more productive than a huge team of average programmers with a few mediocre team leads working to a schedule developed by a marketing group.
 
In BASIC - at least the versions I used - the INT(RND) function returned a number from 0.00 to 0.99. Multiplying by 100 gives 0 to 99; hence, you had to add the +1 at the end.

Ahh, didn't know that. My Basic experience is obviously older than the data retention time of my brain :D

Why did we have to start seeding random number generators, anyway?

Well, "back then" we didn't care much about encryption, key-exchange, challenge-response methods, and the like. Also, we did not do much video or image editing on computers anyway.

Imagine you want to contact another machine somewhat securely. So you use an encrypted channel. You also use random numbers in the encryption algorithm. Now, both sides have to start at the same place in the random sequence to make it work. So, you seed the RNG first.

Another example: Imagine you want to render noise or something else depending on random numbers into video footage. You have 10 machines that can each render different parts of the frame, all at the same time. Here, again, you want all the machines to have the same random number sequence, so you seed it.

Greetings,

Chris
 
Why did we have to start seeding random number generators, anyway?

I've found it very useful when testing/debugging. Suppose you're testing a game, and it crashes after a minute or two. There may have been thousands of random number calls that got it to that point, and if you run the code again, it's unlikely to ever get to exactly the same condition.

But suppose that you had seeded the random number generator with, say, 14. Now, if you run the code again and seed with 14 again, it will go through exactly the same sequence of random numbers and go to exactly the same point that crashed before.
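
A minimal sketch of that idea, reusing the srand()/rand() calls from earlier in the thread (14 is just the arbitrary seed from the example):

Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i;

    srand(14);   /* fixed seed instead of time(NULL) */

    /* Every run prints exactly the same five numbers, which is what
       makes a crash reproducible while debugging. */
    for (i = 0; i < 5; i++)
        printf("%d\n", (rand() % 100) + 1);

    return 0;
}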

ETA: what Chris Klippel said.
 
