Ray Kurzweil gives a pretty good argument in "The Age of Intelligent Machines" that, because of Moore's law and advances in AI and neurology, this will happen in 20-30 years.
I have read quite a lot about Ray K., but I have the impression that he is kind of a dreamer.
His theory of the singularity is not really backed by any data, and he makes a lot of assumptions.
In the case of some human brains, it happened around 1960.
Well... you have to be VERY careful with what you mean by "simulate". If you're talking about duplicating the processes that make up human self-awareness and intelligence
Yes
- it may not happen for a very long time, if ever.
Why?
People talk about mapping out the brain and determining where thoughts, feelings and so forth come from - and on a macro basis, they may be able to do that: identify regions of the brain that are most active while thinking or feeling, etc. However, that's an incredibly far cry from understanding the detailed interactions in those regions that produce the result - let alone the combination of processes that produces self-awareness, the ability to imagine and/or create, and so forth.
For example, we can't even hook up visual technology to the human brain at the same level of quality our eyes provide. (At least, not at the moment.) And that's just input to the brain and the intelligence residing in there. The processes that make "you" exist are much more complex and, at this time, a mystery. (Yes, inroads are being made, but very slowly - and at the moment, there's more speculation than understanding.)
Until the precise mechanisms of each phenomenon are fully understood... why not? How could you accurately distinguish the two?
Basically, it is all made of neurons and synapses, right?
And, down the line, it is all about interactions of atoms and molecules.
The basic laws governing those interactions were discovered long ago.
So, where is the big conceptual obstacle, here?
But it may be possible to create an artificial intelligence whose outward appearance is indistinguishable from a human's - even if the processes producing that appearance are utterly different from the ones employed in our intelligence.
Agreed with that.
Mine was a general statement: you can consider the human brain as a "black box" and just try to simulate it, looking only at inputs and outputs.
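To make that black-box idea concrete: you observe only inputs and outputs from a system you don't understand internally, and fit some model that reproduces the mapping. A minimal sketch, with made-up data and a hypothetical `fit_line` helper (a real brain model would obviously be vastly more complex than a straight line):

```python
# Minimal sketch of black-box modelling: given only input/output pairs from
# an unknown system, fit a simple model that reproduces the mapping without
# knowing anything about the system's internals. The data below is made up.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

if __name__ == "__main__":
    # Pretend these pairs came from observing some unknown "black box".
    inputs = [0.0, 1.0, 2.0, 3.0, 4.0]
    outputs = [1.1, 2.9, 5.2, 7.1, 8.8]
    a, b = fit_line(inputs, outputs)
    print(f"fitted model: y = {a:.2f} * x + {b:.2f}")
```

The point is only that the model reproduces the observed behaviour; it makes no claim about what is going on inside the box.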
It seems it is one thing to build a supercomputer capable of simulating the human brain, and a whole other issue to program it to actually simulate the human brain.
I agree with this. Is simulating the behaviour the same as the actual behaviour itself?
I do not see why there should be such a difference, anyway.
There are significant differences between neurons and multi-neuron structures and anything in the silicon world -- thus a straight up comparison between transistors and neurons or synapses doesn't work.
Add to that the facts that many additional factors go into synaptic communication between neurons AND neurons function both computationally and as data storage, and you begin to see the problems.
The field of A.I. mostly concerns itself with finding ways to do things using the current silicon computing paradigm, that is, "cpu + memory." I could be wrong, but I think the real simulation work (i.e., neural networks) has been all but abandoned for the time being, because researchers realize that trying to simulate neurons with a cpu + memory is extremely inefficient and wasteful.
Thus, the first A.I. that can, for instance, pass a good Turing test will be something that arises out of very non-human thought mechanisms. And, as a programmer who works on the cutting edge of computing technology, I would second the numbers put forth by DrBaltar -- expect to see such a thing in under 30 years. This is a very exciting time to be alive!
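As a rough illustration of what "simulating neurons with a cpu + memory" means in practice, here is a minimal sketch of a single leaky integrate-and-fire neuron stepped serially in time. The function name and all parameter values are placeholders for illustration, not taken from any real brain model; a real simulation would need billions of these, each updated at every time step.

```python
# Minimal sketch: one leaky integrate-and-fire neuron stepped serially in time.
# Every neuron and every time step costs explicit CPU work, which is why
# simulating billions of them this way is so inefficient.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0):
    """Return the list of time-step indices at which the neuron spikes."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and is pushed up by the input current.
        dv = (-(v - v_rest) + i_in) / tau
        v += dv * dt
        if v >= v_threshold:          # threshold crossed -> emit a spike
            spikes.append(step)
            v = v_reset               # reset after firing
    return spikes

if __name__ == "__main__":
    # Constant drive for one simulated second (1000 steps of 1 ms).
    current = [20.0] * 1000
    print("spike count:", len(simulate_lif(current)))
```

Even this toy loop spends a handful of arithmetic operations per neuron per millisecond; scaled up to a whole brain, the cpu + memory approach drowns in bookkeeping that biological tissue gets for free.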
I remember reading quite a bit about neural networks about 20 years ago, and then nothing.
I agree that the comparison between transistors and synapses is arbitrary and (partly) flawed, but, again, in order to build an artificial brain you do not really have to copy it.
Just think: in order to do what a horse does, men built the car, which is more efficient than a horse (and eats a lot less grass), but a car is in no way similar to a horse.
If you look at computer chess, modern algorithms do not work the way human thinking does; still, they are far more efficient, and a modern chess computer can beat almost all human players.
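To make the chess example concrete, the brute-force idea behind classic chess engines is minimax search with alpha-beta pruning. The sketch below runs it on a tiny made-up game tree; the `alphabeta` function, the tree and the scores are illustrative only, and real engines add move ordering, evaluation functions, opening books and much more.

```python
# Toy sketch of minimax with alpha-beta pruning, the brute-force search idea
# behind classic chess engines. A human considers a handful of candidate
# moves; the machine just searches huge numbers of positions and prunes
# whatever it can prove irrelevant.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha-beta search over an abstract game tree."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:      # the opponent would never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

if __name__ == "__main__":
    # A made-up two-level game tree: positions are labels, leaves get scores.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
    best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                     lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
    print("best achievable score:", best)
```

Nothing in there resembles human positional judgment; the machine wins by sheer depth of search, which is exactly the horse-versus-car point.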
I don't think it will take very long at all. Already A.I. designers are seeing their programs do very odd things. Fans of the old game "Black & White" used to put up websites where they posted about the quirky things their pets' AI made them do. One of the designers stated (reference not handy; if demanded I will look for it) that "One time I was picking up fences and placing them to keep a herd of sheep confined. My creature was unhappy my attention was diverted elsewhere, so it kicked in the fence and killed every single sheep."

I'd also read that two of the game engineers were running a LAN game to see how the creatures from each player would interact - looking for bugs and the like, QC, I imagine. The two creatures had started to play catch with a rock. A person walked in front of one creature, and it stopped to look at the person. It then missed catching the rock, which bounced off its head. It immediately started pulling trees out of the ground and tossing them on the rock. The developers were confused; it wasn't programmed behaviour. Then the creature cast a "fireball" miracle on the trees and made a nice little bonfire. It stood and watched for a bit, then walked right into the fire and picked up the now-glowing-red-hot rock. The creature caught fire, but it turned around and threw the rock at the other creature, which had been patiently waiting to resume their game. It caught the red-hot rock and immediately caught fire. The motive appeared to be revenge.
Creepy.
You may want to read up on the criticisms of AI research - the same "2-3 decades away" predictions have been made since the 1950s.
It seems that AI, to become real, is taking a lot longer than predicted.
I have read quite a few books published decades ago which claimed that the AI revolution was near.
Still, BlueGene cannot be programmed to recognize the difference between a bottle of wine and a bottle of water.
More here:
In 1957 Herbert Simon said that within 10 years, a digital computer would be the world's chess champion.
http://www.geocities.com/SiliconValley/Lab/7378/comphis.htm
It took 40-50 years, not 10 years.