
How long will it take for computers to simulate the human brain?

So, in the last few years, Moore's law has worked pretty well.
We now have $150 microprocessors (quad-cores) with almost 1 billion transistors.
The most powerful supercomputers use tens of thousands of chips.
If we compare the number of transistors in a soon-to-come supercomputer with the neurons in a typical person's brain(*), we have about 10 trillion transistors against 100 billion neurons (transistors win by 100:1).
If we compare the number of transistors in a soon-to-come supercomputer with the synapses in a typical person's brain(*), we have about 10 trillion transistors against 100 trillion synapses (synapses win by 10:1; in five years, transistors will win).
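For what it's worth, the back-of-the-envelope arithmetic above can be checked in a few lines. The counts are the rough order-of-magnitude figures quoted in this post, not precise measurements, and the 18-month doubling period is just one common reading of Moore's law:

```python
import math

# Rough counts quoted above - order-of-magnitude figures, not measurements.
transistors = 10e12   # ~10 trillion transistors in a near-term supercomputer
neurons = 100e9       # ~100 billion neurons in a typical brain
synapses = 100e12     # ~100 trillion synapses

print(f"transistors vs neurons:  {transistors / neurons:.0f}:1")   # 100:1
print(f"synapses vs transistors: {synapses / transistors:.0f}:1")  # 10:1

# If transistor counts double every 18 months, the time needed to
# close a 10x gap is:
years = 1.5 * math.log2(synapses / transistors)
print(f"crossover in about {years:.1f} years")  # ~5 years
```

So the "in five years, transistors will win" claim does hold, but only under the aggressive 18-month doubling assumption.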

So, why can the most complex supercomputer not recognize a bottle of water from a can of beer (something that the lowest-IQ man on Earth can do with no problem)?

How long will it take for computers to simulate the brain?

(*)
Yes, the comparison between the number of transistors and the number of neurons and synapses is arbitrary.
 
Well... you have to be VERY careful with what you mean by "simulate". If you're talking about duplicating the processes that make up human self-awareness and intelligence - it may not happen for a very long time, if ever.

People talk about mapping out the brain and determining where thoughts, feelings and so forth come from - and on a macro basis, they may be able to do that: identify regions of the brain that are most active while thinking or feeling, etc. However, that's an incredibly far cry from understanding the detailed interactions in those regions that produce the result - let alone the combination of processes that produces self-awareness, the ability to imagine and/or create, and so forth. For example, we can't even hook up visual technology to the human brain at the same level of quality our eyes provide. (At least, at the moment.) And that's just input to the brain and the intelligence residing in there. The processes that make "you" exist are much more complex and, at this time, a mystery. (Yes, inroads are being made, but very slowly - and at the moment, there's more speculation than understanding. :))

But it may be possible to create an artificial intelligence whose outward appearance is indistinguishable from human - even if the processes producing that appearance are utterly different from the ones employed in our intelligence.
 
Seems it is one thing to build a supercomputer capable of simulating the human brain, and a whole other issue to program it to actually simulate the human brain.

“If the human brain were simple enough for us to understand, we would be too simple to understand it”
 
Ray Kurzweil gives a pretty good argument in "The Age of Intelligent Machines" that, because of Moore's law and advancements in AI and neurology, this will happen in 20-30 years.
 
Unfortunately I don't see it happening in the next century or so. And by this I don't mean it will happen in 2 or 3 centuries. We are still a very long way from completely understanding all the details of the brain's operation, so we can't even predict a date with any degree of certainty.
 
Unfortunately I don't see it happening in the next century or so. And by this I don't mean it will happen in 2 or 3 centuries. We are still a very long way from completely understanding all the details of the brain's operation, so we can't even predict a date with any degree of certainty.

I agree with this. Is simulating the behaviour the same as the actual behaviour itself?
 
I agree with this. Is simulating the behaviour the same as the actual behaviour itself?

Until the precise mechanisms of each phenomenon are fully understood... why not? How could you accurately distinguish the two?
 
There are significant differences between neurons and multi-neuron structures and anything in the silicon world -- thus a straight up comparison between transistors and neurons or synapses doesn't work.

Add to that the facts that many additional factors go into synaptic communication between neurons AND neurons function both computationally and as data storage, and you begin to see the problems.

The field of A.I. mostly concerns itself with finding ways to do things using the current silicon computing paradigm, that is, "CPU + memory." I could be wrong, but I think the real simulation work (i.e. neural networks) has been all but abandoned for the time being, because researchers realize that trying to simulate neurons with a CPU + memory is extremely inefficient and wasteful.

Thus, the first A.I. that can, for instance, pass a good Turing test, will be something that arises out of very non-human thought mechanisms. And, as a programmer who works on the cutting edge of computing technology, I would second the numbers put forth by DrBaltar -- expect to see such a thing in under 30 years. This is a very exciting time to be alive!
 
I don't think it will take very long at all. Already A.I. designers are seeing their programs do very odd things. Fans of the old game "Black & White" used to have websites up where they posted about the quirky things the pet's AI made them do. One of the designers had stated (reference not handy; if demanded I will look for it): "One time I was picking up fences and placing them to keep a herd of sheep confined. My creature was unhappy my attention was diverted elsewhere, so it kicked in the fence and killed every single sheep."

I'd also read that two of the game engineers were running a LAN game to see how the creatures from each player would interact - looking for bugs and the like, QC, I imagine. The two creatures had started to play catch with a rock. A person walked in front of one creature, and it stopped to look at it. It then missed catching the rock, which bounced off its head. It immediately started pulling trees out of the ground and tossing them on the rock. The developers were confused; it wasn't programmed behaviour. Then the creature cast a "fireball" miracle on the trees and made a nice little bonfire. It stood and watched for a bit, then walked right into it and picked up the now-glowing-red-hot rock. The creature caught fire, but it turned around and threw the rock at the other creature, who had been patiently waiting to resume their game. It caught the red-hot rock and immediately caught fire. The motive appeared to be revenge.

Creepy.
 
That works with pretty much everything, though, Slyjoe - e.g. flying cars and rocket belts - and sentient AI fits pretty well too. I think the only way we'll know if it is possible is when it happens.
 
Actually, if the brain is a biological computer, a simulation of the brain would be called an emulator. An emulator simulates a CPU or OS, and software native to the simulated platform can then be run on the emulator. Once an emulator of the brain is produced, the initial condition of the neural net state would be the 'software' run on the emulated brain.

So the question is, can a neuron be simulated to a high enough degree on the computer, and then can the structure of the brain and the interconnections between the neurons be accurately modeled on the computer? That would require a very high resolution brain scan.
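As a rough illustration of what "simulating a neuron" means at the crudest level, here is a sketch of a leaky integrate-and-fire model, one of the simplest neuron models used in software simulations. The parameters are arbitrary toy values; real neurons involve ion channels, neurotransmitters and plasticity that this ignores entirely:

```python
# A minimal leaky integrate-and-fire neuron - about the simplest model
# used in software simulations. The parameter values are arbitrary toy
# choices, not measured biological constants.

def simulate_lif(input_current, dt=1.0, tau=10.0,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the spike times produced by a sequence of input currents."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest and integrates input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_threshold:        # threshold crossed: fire and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A steady input drive produces a regular spike train.
print(simulate_lif([0.15] * 100))
```

Even this caricature makes the scaling problem visible: a whole-brain simulation means ~100 billion of these (far richer) units, plus the wiring diagram between them, which is where the high-resolution scan comes in.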

Or... perhaps the best way to go about it is once we are able to simulate cell division and function based on DNA, to simulate the growth of a fetus, which would grow the brain for you.

I'm not sure which route is easier.
 
You could bypass both and just write a breeder AI program, with direction toward humanlike behavior, and let it run over a few million generations. You'd need a massive machine for that, though.
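To make the "breeder" idea concrete, here is a toy sketch of that kind of evolutionary loop: selection plus mutation over generations. The bitstring target is an arbitrary stand-in for "humanlike behaviour"; the hard part of the proposal is the fitness function, which here is trivially easy:

```python
import random

# A toy "breeder" loop: keep the fittest candidates each generation
# and refill the population with mutated copies. The target bitstring
# is an arbitrary stand-in - scoring "humanlike behaviour" is the
# genuinely hard, unsolved part of the proposal above.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half, refill with mutated copies of survivors.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

print(f"best fitness {fitness(max(population, key=fitness))} "
      f"at generation {generation}")
```

A 10-bit target falls in a handful of generations; the "massive machine" caveat in the post is exactly what happens when the genome is a behaviour-producing program instead of 10 bits.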
 
That works with pretty much everything, though, Slyjoe - e.g. flying cars and rocket belts - and sentient AI fits pretty well too. I think the only way we'll know if it is possible is when it happens.

Well, flying cars and rocket belts are possible. We have demonstrated those. Sentient AI, no.

So why don't we all have flying cars and rocket belts? There is a large variety of reasons, many of them sociological. It is similar to video telephony: the technology isn't the stumbling block as much as social acceptance.

With AI I think there is a fundamentally different problem. We understand the physics behind our other technological inventions. Not so much when trying to emulate the physics of biology.

I see it as similar to predicting rockets in 2-3 decades at the time of Newton. And we don't even have the physics down yet regarding sentience.

Interesting stuff though. :)
 
You may want to read up on the criticisms of AI research - the same 2-3 decades predictions have been made since the 1950s. :)

On the other hand, computing has consistently outpaced the predictions made for it, so in some ways A.I. has actually advanced faster than predicted.

What has NOT happened, that people thought would, is something that can pass a good Turing test.
 
On the other hand, computing has consistently outpaced the predictions made for it, so in some ways A.I. has actually advanced faster than predicted.

What has NOT happened, that people thought would, is something that can pass a good Turing test.

Would it be fair to say that it has advanced neither faster nor slower, but differently than imagined? I think it's similar to the 'paperless office' story. It has more to do with the initial framing of the problem than the accuracy of predictions (on both sides).
 
Would it be fair to say that it has advanced neither faster nor slower, but differently than imagined? I think it's similar to the 'paperless office' story. It has more to do with the initial framing of the problem than the accuracy of predictions (on both sides).

Yes that would be another way of putting it.


I always find it funny how the media portrays very intelligent machines (e.g. HAL from 2001, Mother from Aliens, etc.) using already obsolete hardware.
 
Ray Kurzweil gives a pretty good argument in "The Age of Intelligent Machines" that, because of Moore's law and advancements in AI and neurology, this will happen in 20-30 years.

I have read quite a lot about Ray K., but I have the impression he is kind of a dreamer.
His theory of the singularity is not really backed by any data, and he makes lots of assumptions.

In the case of some human brains, it happened around 1960.

:D

Well... you have to be VERY careful with what you mean by "simulate". If you're talking about duplicating the processes that make up human self-awareness and intelligence

Yes

- it may not happen for a very long time, if ever.

Why?

People talk about mapping out the brain and determining where thoughts, feelings and so forth come from - and on a macro basis, they may be able to do that: identify regions of the brain that are most active while thinking or feeling, etc. However, that's an incredibly far cry from understanding the detailed interactions in those regions that produce the result - let alone the combination of processes that produces self-awareness, the ability to imagine and/or create, and so forth. For example, we can't even hook up visual technology to the human brain at the same level of quality our eyes provide. (At least, at the moment.) And that's just input to the brain and the intelligence residing in there. The processes that make "you" exist are much more complex and, at this time, a mystery. (Yes, inroads are being made, but very slowly - and at the moment, there's more speculation than understanding. :))

Until the precise mechanisms of each phenomenon are fully understood... why not? How could you accurately distinguish the two?

Basically, it is all made of neurons and synapses, right?
And, down the line, it is all about interactions of atoms and molecules.
The basic laws of those interactions were found long ago.
So, where is the big conceptual obstacle here?

But it may be possible to create an artificial intelligence whose outward appearance is indistinguishable from human - even if the processes producing that appearance are utterly different from the ones employed in our intelligence.

Agreed with that.
Agreed with that.
Mine was a general statement: you can consider the human brain as a "black box" and just try to simulate it, looking at inputs and outputs.

Seems it is one thing to build a supercomputer capable of simulating the human brain, and a whole other issue to program it to actually simulate the human brain.

I agree with this. Is simulating the behaviour the same as the actual behaviour itself?

I do not see why there should be such a difference, anyway.

There are significant differences between neurons and multi-neuron structures and anything in the silicon world -- thus a straight up comparison between transistors and neurons or synapses doesn't work.

Add to that the facts that many additional factors go into synaptic communication between neurons AND neurons function both computationally and as data storage, and you begin to see the problems.

The field of A.I. mostly concerns itself with finding ways to do things using the current silicon computing paradigm, that is, "CPU + memory." I could be wrong, but I think the real simulation work (i.e. neural networks) has been all but abandoned for the time being, because researchers realize that trying to simulate neurons with a CPU + memory is extremely inefficient and wasteful.

Thus, the first A.I. that can, for instance, pass a good Turing test, will be something that arises out of very non-human thought mechanisms. And, as a programmer who works on the cutting edge of computing technology, I would second the numbers put forth by DrBaltar -- expect to see such a thing in under 30 years. This is a very exciting time to be alive!

I remember having read quite a lot about neural networks about 20 years ago, and then nothing.
I agree that the comparison between transistors and synapses is arbitrary and (partly) flawed, but, again, in order to build an artificial brain you do not really have to copy it.
Just think: in order to do what the horse does, men built the car, which is more efficient than the horse (and eats a lot less grass), but a car is in no way similar to a horse.
If you look at computer chess, modern algorithms do not work the way human thinking does, yet they are far more efficient, and a modern chess computer can beat almost all human players.
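As an illustration of the chess point, here is a minimal game-tree search: plain negamax on a toy take-away game rather than chess (real engines add alpha-beta pruning, evaluation heuristics, opening books and much more). It "thinks" nothing like a human, yet it plays the toy game perfectly:

```python
# Plain negamax on a toy take-away game: players alternately remove
# 1-3 stones, and whoever takes the last stone wins. The core idea -
# exhaustively searching the game tree rather than reasoning like a
# human - is the same one chess programs build on.

def best_move(stones):
    """Return (stones to take, value) for the player to move:
    value +1 means a forced win, -1 a forced loss."""
    if stones == 0:
        return None, -1   # the previous player took the last stone
    best_take, best_value = None, -2
    for take in (1, 2, 3):
        if take <= stones:
            _, opponent_value = best_move(stones - take)
            if -opponent_value > best_value:
                best_take, best_value = take, -opponent_value
    return best_take, best_value

print(best_move(10))  # (2, 1): take 2 stones and a win can be forced
```

From a pile of 10, taking 2 leaves the opponent a multiple of 4, which the search discovers is a forced loss for them - found by brute enumeration, with no horse-like "understanding" anywhere.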

I don't think it will take very long at all. Already A.I. designers are seeing their programs do very odd things. Fans of the old game "Black & White" used to have websites up where they posted about the quirky things the pet's AI made them do. One of the designers had stated (reference not handy; if demanded I will look for it): "One time I was picking up fences and placing them to keep a herd of sheep confined. My creature was unhappy my attention was diverted elsewhere, so it kicked in the fence and killed every single sheep."

I'd also read that two of the game engineers were running a LAN game to see how the creatures from each player would interact - looking for bugs and the like, QC, I imagine. The two creatures had started to play catch with a rock. A person walked in front of one creature, and it stopped to look at it. It then missed catching the rock, which bounced off its head. It immediately started pulling trees out of the ground and tossing them on the rock. The developers were confused; it wasn't programmed behaviour. Then the creature cast a "fireball" miracle on the trees and made a nice little bonfire. It stood and watched for a bit, then walked right into it and picked up the now-glowing-red-hot rock. The creature caught fire, but it turned around and threw the rock at the other creature, who had been patiently waiting to resume their game. It caught the red-hot rock and immediately caught fire. The motive appeared to be revenge.

Creepy.

You may want to read up on the criticisms of AI research - the same 2-3 decades predictions have been made since the 1950s. :)

It seems that AI, to become real, is taking a lot longer than predicted.
I have read quite a few books published decades ago which claimed that the AI revolution was near.
Still, BlueGene cannot be programmed to recognize the difference between a bottle of wine and a bottle of water.
More here:
In 1957, Herbert Simon said that within 10 years a digital computer would be the world's chess champion.
http://www.geocities.com/SiliconValley/Lab/7378/comphis.htm
It took 40-50 years, not 10.
 
