
Immortality 1.0

dogjones

http://www.guardian.co.uk/science/2005/may/22/theobserver.technology

According to this guy, in 40 years we'll all be able to upload ourselves into a computer and live forever.

This reminds me of the whole AI thing, which has apparently been "just around the corner" for 50-odd years.

Thoughts?

This is the most amusing extrapolation:

He believes that today's youngsters may never have to die, and points to the rapid advances in computing power demonstrated last week, when Sony released the first details of its PlayStation 3. It is 35 times more powerful than previous games consoles. 'The new PlayStation is 1 per cent as powerful as a human brain,' he said. 'It is into supercomputer status compared to 10 years ago. PlayStation 5 will probably be as powerful as the human brain.'
 
Well, I am working on an 8-core machine as we speak, I am in line for a 16-core machine, and I hear that the next iPhone is going to have multiple cores, which was state of the art for a desktop only three years ago.
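
Just to put the quoted arithmetic into concrete terms, here's a quick back-of-the-envelope sketch (Python). It takes the article's own figures at face value -- the PS3 at 1% of a human brain, some fixed multiplier per console generation -- so the multipliers are illustrative assumptions, nothing more.

Code:
# Toy check of the article's console extrapolation. All figures are the
# article's own claims or illustrative assumptions, not measurements.

def generations_to_parity(start_fraction, per_gen_multiplier):
    """Generations until start_fraction * multiplier**n reaches 1.0 ("a brain")."""
    n = 0
    while start_fraction * per_gen_multiplier ** n < 1.0:
        n += 1
    return n

# Either the article's 35x per generation or a gentler 10x reaches parity two
# generations after the PS3 -- i.e. "PlayStation 5" -- while plain doubling
# takes seven.
for multiplier in (35.0, 10.0, 2.0):
    gens = generations_to_parity(0.01, multiplier)
    print(f"{multiplier:>4.0f}x per generation: parity after {gens} generation(s)")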

Here is the thing about predictions by the A.I. community (and futurists, for that matter) -- they are affected by the exponential curve of advancement just like everything else.

That means that if the optimists of today (like myself) are off on our predictions by the same amount as the optimists of yesteryear -- and there is no reason to think we are any more wrong than they were, given how much more we know about recognizing when we are wrong -- then the error of our predictions will translate into an exponentially smaller timescale than the error of the last generation's predictions.

Does that make sense? In other words, while theorists of the '60s might have been off by 100 years in their predictions, and we are just as wrong as them, we might be off by only a decade or so.
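
To put my own claim in toy-model terms (a sketch of the premise, not a proof of it): assume the doubling time for the relevant capability keeps shrinking. Then the same multiplicative shortfall in a prediction costs fewer and fewer calendar years. The doubling times below are made up purely for illustration.

Code:
import math

# Toy model of the "errors compress" idea. Premise (assumed, not established):
# the doubling time of the relevant capability shrinks from one generation of
# forecasters to the next, so the same 1000x underestimate of the capability
# required costs fewer calendar years each time.

def years_to_close_gap(shortfall_factor, doubling_time_years):
    """Calendar years needed to gain shortfall_factor at a fixed doubling time."""
    return math.log2(shortfall_factor) * doubling_time_years

# Hypothetical doubling times for three generations of forecasters.
for year, doubling_time in [(1965, 8.0), (1995, 4.0), (2025, 2.0)]:
    years = years_to_close_gap(1000, doubling_time)
    print(f"{year}: a 1000x underestimate costs about {years:.0f} calendar years")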

Another way to look at it is that any predictions are starting to become meaningless as we approach asymptotic progress as a species. Even that guy in the article is wrong, I think, about his 2075 prediction for when uploads become cheap enough for everyone to do it -- because after a few smart programmers upload and then bootstrap themselves into godlike intelligence a singularity will occur and all bets are off at that point.
 
http://www.guardian.co.uk/science/2005/may/22/theobserver.technology

According to this guy, in 40 years we'll all be able to upload ourselves into a computer and live forever.

Not likely. This seems like a classic example of unreasonable extrapolation. Because the obvious pieces involved in the problem are familiar to us (i.e., we know about brains, we know about computers, we know that computers keep getting better), the actual complexity and difficulty of the problem is easy to ignore.

If you asked people in 1900 if we'd cure cancer or land on the moon by the year 2000, I suspect more of them would have said the former than the latter, because they were familiar with cancer, they were familiar with doctors curing diseases, so why not? But travel to the moon? That would be pure science fiction. But in point of fact, flying to the moon was a far simpler problem than curing cancer, which is why we've done the former but not the latter.

Artificial intelligence of any kind (let alone transferring a human "consciousness" to a computer) strikes me as a cure-for-cancer problem, not a fly-to-the-moon problem: it's so complex, we don't even know what would need to be done. And we almost certainly won't be able to do it within the next 40 years. By contrast, it WAS apparent pretty early in the 20th century what would need to be done to fly to the moon: very big rockets with efficient thrust, air-tight capsules to keep people inside safe, etc. The details were difficult, but the task was obvious. Not so here.
 
Here is the thing about predictions by the A.I. community (and futurists, for that matter) -- they are affected by the exponential curve of advancement just like everything else.

That means that if the optimists of today (like myself) are off on our predictions by the same amount as the optimists of yesteryear -- and there is no reason to think we are any more wrong than they were, given how much more we know about recognizing when we are wrong -- then the error of our predictions will translate into an exponentially smaller timescale than the error of the last generation's predictions.

Well, no. You can talk about this "exponential curve of advancement" because you've got a way of measuring progress after the fact. But how do you measure the distance to some advancement when you don't even know what steps must be taken to reach that advancement? You can't.

I can make predictions about how many transistors will fit on a chip by 2050, I can make predictions about how many FLOPS such a chip will be capable of performing, etc. And I can have some reasonable confidence that it will happen, even if not on schedule, because I know the relevant metrics, and it's a clear continuation of a process that already has an extensive period of progress along those metrics. But how many cores does it take to simulate a human brain? We've got no bloody idea. We've got no way of forming sensible metrics to measure the distance between here and there, and hence no history of progress along those metrics.
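
For contrast, here is the kind of extrapolation that does have a measured metric behind it; the baseline count and doubling period below are rough, illustrative assumptions, not authoritative figures.

Code:
# Extrapolating a metric with a long measured history (transistors per chip).
# The baseline and doubling period are rough, illustrative assumptions.

def extrapolate(base_year, base_count, doubling_years, target_year):
    """Project a quantity forward assuming a constant doubling period."""
    doublings = (target_year - base_year) / doubling_years
    return base_count * 2 ** doublings

# e.g. from a ~2-billion-transistor chip around 2010, doubling every ~2.5 years
print(f"Transistors per chip in 2050: ~{extrapolate(2010, 2e9, 2.5, 2050):.1e}")

# No comparable, measured metric exists for "distance to a brain simulation",
# which is the whole problem.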

Does that make sense? In other words, while theorists of the '60s might have been off by 100 years in their predictions, and we are just as wrong as them, we might be off by only a decade or so.

And how do you know which advances just take longer, and which advances just don't happen? Where's my flying car?

The truth is, I'll probably never get a flying car, no matter how long I live. And I bet I'll never be able to upload my consciousness either.
 
I think that eventually we'll figure out the reason for aging and develop a cure for it. People will still die, they'll get heart attacks and strokes and so on, but they won't get old. I also think that eventually we'll develop a compound that dissolves arterial plaque without harming vessel walls, and heart attacks will be a thing of the past too.

By "eventually", I mean within the next hundred years. I think it's entirely possible that the first human being to live to 150 has already been born. And, of course, medical science won't be standing still after that.

I think it's probably too late for me (I'm 46) but I have hope that my kids will live to see ages that we think impossible today.
 
Don't confuse computing power with artificial intelligence. Our fast computers are no closer to AI than the Atari 2600 was.
 
Considering that my body has lasted a lot longer than any electronic device I've ever owned, I'll put my money on biology. I agree with jhunter that we'll probably have a biological way of substantially prolonging our lives before we are able to upload our thoughts into a computer.

Steve S.
 
I can make predictions about how many transistors will fit on a chip by 2050, I can make predictions about how many FLOPS such a chip will be capable of performing, etc. And I can have some reasonable confidence that it will happen, even if not on schedule, because I know the relevant metrics, and it's a clear continuation of a process that already has an extensive period of progress along those metrics. But how many cores does it take to simulate a human brain? We've got no bloody idea. We've got no way of forming sensible metrics to measure the distance between here and there, and hence no history of progress along those metrics.
Better tell these guys to pack it in and go home then.
 
http://www.guardian.co.uk/science/2005/may/22/theobserver.technology

According to this guy, in 40 years we'll all be able to upload ourselves into a computer and live forever.

This reminds me of the whole AI thing, which has apparently been "just around the corner" for 50-odd years.

Thoughts?
We will certainly have the computing power. Simulating a brain on a computer is a difficult problem, but not an insoluble one. Uploading the contents of your brain to that computer, though, will be very, very difficult.
 
The fact that they're trying says nothing about their chances of success. See: Biosphere II.
Oh, they may fail. Failure is always an option. But your larger point -

But how many cores does it take to simulate a human brain? We've got no bloody idea. We've got no way of forming sensible metrics to measure the distance between here and there, and hence no history of progress along those metrics.

Isn't true. We do have a way of forming sensible metrics, of estimating how many cores it will take (for a given type of simulation - neural net like Cat Brain or molecular like Blue Brain).

Biosphere II is a very good analogy: We know for sure it can be done, because we have a working example. We just need to find out how much we can simplify it before it stops working.
 
Isn't true. We do have a way of forming sensible metrics, of estimating how many cores it will take (for a given type of simulation - neural net like Cat Brain or molecular like Blue Brain).

That's the key: we've got no clue at this point what sort of simulation is good enough to reproduce anything resembling consciousness.
 
That's the key: we've got no clue at this point what sort of simulation is good enough to reproduce anything resembling consciousness.
Sure we do. Again, there may be some huge surprise and we turn out to be wrong, but there is no reason to think that a quantum-mechanical simulation is necessary. A neural-net simulation will work, but we need to know how to tune the neural net, and a molecular simulation is a good way to learn that, which is why the Blue Brain people are doing just that.

And that in turn is why they're working on a rat neocortex while their competition is doing a whole cat: It takes orders of magnitude more computing power. But it can be done, and we know how much computing power it will take to do it - again short of a huge surprise along the way.
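
For what it's worth, here is the back-of-the-envelope form of that estimate for a neural-net-style (point-neuron) simulation. Every constant is an order-of-magnitude assumption of mine, not a figure from Blue Brain or the cat work, and the per-core throughput is purely hypothetical.

Code:
# Order-of-magnitude estimate for a point-neuron ("neural net") simulation.
# All constants are rough assumptions, not figures from any actual project.

NEURONS = 8.6e10               # ~86 billion neurons in a human brain (rough)
SYNAPSES_PER_NEURON = 1e4      # order-of-magnitude average
MEAN_FIRING_HZ = 1.0           # assumed average firing rate
OPS_PER_SYNAPTIC_EVENT = 10    # assumed cost of one synaptic update

ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                  * MEAN_FIRING_HZ * OPS_PER_SYNAPTIC_EVENT)
print(f"Whole-brain point-neuron simulation: ~{ops_per_second:.0e} ops/s")

CORE_OPS_PER_SECOND = 1e11     # hypothetical throughput of a single core
print(f"Cores needed at that rate: ~{ops_per_second / CORE_OPS_PER_SECOND:.0e}")

# A molecular-level simulation of the same tissue would cost many orders of
# magnitude more, which is why it's only attempted for tiny volumes
# (e.g. a rat neocortical column).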
 
Who would want to live inside a box? I personally would like to live a long time, but not forever. If I could live, say, 1,000 years in a healthy 25-year-old body, then cool, but not forever.
 
Who would want to live inside a box? I personally would like to live a long time, but not forever. If I could live, say, 1,000 years in a healthy 25-year-old body, then cool, but not forever.

Suppose it's a really, really big box?
 
Sure we do. Again, there may be some huge surprise and we turn out to be wrong, but there is no reason to think that a quantum-mechanical simulation is necessary. A neural-net simulation will work, but we need to know how to tune the neural net, and a molecular simulation is a good way to learn that, which is why the Blue Brain people are doing just that.

And that in turn is why they're working on a rat neocortex while their competition is doing a whole cat: It takes orders of magnitude more computing power. But it can be done, and we know how much computing power it will take to do it - again short of a huge surprise along the way.

I guess I've based my skepticism of the advent of AI mostly on this article:

http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html

Has significant progress been made since?
 
