
Immortality 1.0

I guess I've based my skepticism of the advent of AI mostly on this article:

http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html

Has significant progress been made since?
Well, projects like Blue Brain, a molecular-level simulation of part of a rat brain, are indeed a significant advance.

But for the most part, that article is arguing against a straw-man version of AI research and throwing out red herrings. Combinatorial explosion means we can't exhaustively test a neural network? Then don't do that. (The brain certainly doesn't!) Don't know the exact chemical interactions that make synapses work? Find out.

If you narrow down the field of AI to exclude all the work that's going on to solve the specific issues you're raising, then it follows that no work is going on to solve those issues. The problem is, that's not actually true.
 
Sure we do.

No, we don't. We've got suspicions, but we really don't know.

A neural-net simulation will work

That's a pretty damned broad category, covering a rather wide range of complexities. One could simulate any number of neurons with any number of connections, but if the modeling for each neuron isn't good enough, you'll never get anything resembling a human brain. And we really don't know what "good enough" means at this point.
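To put the "good enough" problem in concrete terms, here's a minimal sketch (purely illustrative, not anyone's actual research code) of about the simplest neuron model in use, a leaky integrate-and-fire unit. Every number in it is an assumed modelling choice, and it throws away ion channels, dendritic geometry, neuromodulators and synaptic chemistry entirely:

```python
# Minimal sketch of a leaky integrate-and-fire neuron.
# All parameter values are illustrative, not fitted to biology.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Integrate a list of input currents (one per time step, in ms)
    and return the membrane-voltage trace plus the spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leak back towards rest, plus whatever current is injected.
        v += ((-(v - v_rest) + i_in) / tau) * dt
        if v >= v_threshold:      # crude all-or-nothing spike...
            spikes.append(step * dt)
            v = v_reset           # ...and an instant reset: no refractory
                                  # dynamics, no ion channels, no chemistry
        trace.append(v)
    return trace, spikes

# Drive it with a constant current for 100 ms of simulated time.
trace, spikes = simulate_lif([20.0] * 1000)
print(f"{len(spikes)} spikes in 100 ms")
```

You can wire up as many of these as you like; whether a network of them tells you anything about a human brain is exactly the open question.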

Look, I'm not saying that what those folks are doing is pointless. It's not. They could learn quite a bit. But the fact that they're reaching towards a particular goal doesn't mean that we've got any real idea of how far away that goal is. I really don't think we do. Hell, I don't even believe that they believe everything they say about what their work is likely to produce. And that's not a slur against them, just a statement about the way science is often sold to the public at large: people sell their work on the most optimistic possible outcome, not necessarily on what they think is most likely.
 
I think the focus on computing power is fairly pointless, really. Sure, we don't know exactly how it's going to progress, but we're pretty certain that it will carry on progressing, and that at some probably not-too-distant point in the future we will have the computing power available to simulate a human brain. But we have absolutely no idea how to even think about copying a human consciousness to a computer. It's not a question of how long it might take to get there; we simply have no idea if such a thing is even possible, let alone how we might go about doing it.

Computing power is not all it takes to get AI. AI is not all it takes to be able to copy and paste humans. The conclusion that in 40 years we'll be able to save humans to computers because computers are faster is about as valid as saying that in 40 years we'll be living in other galaxies because cars are faster.
 
We may get smart enough to raise the median expected age of death. A world of mostly old people and very few babies, and/or extreme over-population, will be most unpleasant.
 
Not to say that computers can't simulate human output remarkably well, to the point of being indistinguishable. But this appears to subtly confuse simulating consciousness/brains (simulating "Jim") with actually downloading "Jim"/people's consciousness/brains. Those are very different questions, and the latter will clearly require a lot more thinking through. At the moment, though, this is a rather spectacular (if not chimerical) claim. But I understand our human tendency to get caught up in a moment of high enthusiasm.
 
Well, no. You can talk about this "logarithmic curve of advancement" because you've got a way of measuring progress after the fact. But how do you measure the distance to some advancement when you don't even know what steps must be taken to reach that advancement? You can't.

I can make predictions about how many transistors will fit on a chip by 2050, I can make predictions about how many FLOPS such a chip will be capable of performing, and so on. And I can have some reasonable confidence that it will happen, even if not on schedule, because I know the relevant metrics, and it's a clear continuation of a process with an extensive history of progress along those metrics. But how many cores does it take to simulate a human brain? We've got no bloody idea. We've got no way of forming sensible metrics to measure the distance between here and there, and hence no history of progress along those metrics.
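That first kind of prediction really is just arithmetic. As a toy illustration (the starting transistor count and the two-year doubling period are assumptions picked for the example, not measurements), the extrapolation looks like this:

```python
# Toy Moore's-law style extrapolation. Both constants are assumptions
# chosen for illustration.
BASE_YEAR = 2010
BASE_COUNT = 2.3e9        # assumed: roughly a 2010-era high-end chip
DOUBLING_YEARS = 2.0      # assumed: "doubles every two years"

def projected_transistors(year):
    """Naive exponential extrapolation of transistors per chip."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

for year in (2020, 2030, 2050):
    print(f"{year}: ~{projected_transistors(year):.1e} transistors per chip")
```

There's no analogous formula for "distance to a simulated human brain", because nobody knows what the metric on the x-axis would even be.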

No, the curve applies to everything, because everything is linked.

Our cores get faster and denser --> we can compute faster --> we communicate better and simulate better and model better --> our science research proceeds faster --> our design and manufacturing capability gets better --> back to step 1.

Ask any lab researcher, anywhere, in any field, whether things are proceeding at an exponential pace. I mean, we have AI robots making actual discoveries in biology now and we have biological neurons controlling machines.

This is a very exciting time to be alive, if you just look around you, because of how fast progress is being made in every field.

And how do you know which advances just take longer, and which advances just don't happen? Where's my flying car?

The truth is, I'll probably never get a flying car, no matter how long I live. And I bet I'll never be able to upload my consciousness either.

Well, the general formula is that if X will make someone filthy rich, and X makes sense, then X will be for sale at some point.

Flying cars would make tons of money, except they don't make sense. Why? Because: 1) people are utterly stupid and would just kill themselves even faster than they do with normal cars; 2) if a flying car breaks down you die, whereas if a normal car breaks down you just sit there waiting for a tow truck; and 3) flying cars with current technology use huge amounts of fuel compared to normal cars.

The same is not true of upload tech -- it makes perfect sense, even if only for temporary use in gaming. You are aware, are you not, that gaming is on track to be the most profitable product in the entertainment industry? $$$$$$$$$$$$$
 
We will certainly have the computing power. Simulating a brain on a computer is a difficult problem, but not an insoluble one. Uploading the contents of your brain to that computer, though, will be very, very difficult.

Perhaps less so for some brains than others? :boxedin:
 
No, the curve applies to everything, because everything is linked.

An exponential rate of advancement doesn't tell you much if you don't know how far away your goal is. That was my point, and nothing about the universality of that advancement changes it.

Well, the general formula is that if X will make someone filthy rich, and X makes sense, then X will be for sale at some point.

Flying cars would make tons of money, except they don't make sense.
...
The same is not true of upload tech -- it makes perfect sense, even if only for temporary use in gaming.

You say it makes perfect sense, but you've got no idea how it would actually work (what makes you think it's going to be reversible?), what it would do to you, or what the experience would be like. Sorry, but you really can't justify that conclusion. At this point, we've got no idea what downloading your brain into a computer would really mean. Oh, you may have ideas about what you'd like it to mean, but just like the flying car, reality may not play along.

if a flying car breaks down you die

And if there's a bug in the computer program, you go insane.
 
Not to say that computers can't simulate human output remarkably well, to the point of being indistinguishable.

Can they really now? Could you give me an example? I had a "chat" a couple of years ago with a conversation bot that won the Turing prize that year - I can't remember what it was called. But it was nowhere near indistinguishable from human.

But I understand your human tendency to get caught up in a moment of high enthusiasm.

Oh... my... ;)
 
Can they really now? Could you give me an example? I had a "chat" a couple of years ago with a conversation bot that won the Turing prize that year - I can't remember what it was called. But it was nowhere near indistinguishable from human.

The trick is to pick the right human to compare them to. This may appear to be an obvious computer, but compared to something like this, it's the height of coherent rationality.

The more popular the internet becomes, the more clear it is that a program that does nothing more than randomly string together ASCII characters could easily pass for human. Even having them make real words is not necessarily a requirement.
 
I think we'll find out in 40 years whether or not he's correct in his prediction.
 
The Blue Brain simulation is at the molecular level?

BTW: I'm opposed to the Blue Brain simulation, because a duplicate of a human brain would behave just like a brain and would possess sentience, and it would be immoral to create such a being for an experiment, regardless of how much would be learned from it.
 
"'The new PlayStation is 1 per cent as powerful as a human brain,' he said."

We'll just take his word for that, shall we?
 
The trick is to pick the right human to compare them to. This may appear to be an obvious computer, but compared to something like this, it's the height of coherent rationality.

The more popular the internet becomes, the more clear it is that a program that does nothing more than randomly string together ASCII characters could easily pass for human. Even having them make real words is not necessarily a requirement.

Computer output that's indistinguishable from a nutjob is hardly an achievement. But yes, I take your rather depressing point. It would be fairly easy to emulate some of the posters here. Mozina would be really easy. Just program it to ignore input and respond with a rant about Godflation. Hey! We should have a little mini-JREF Turing prize competition! Whose woobot/crackbot is most like a real woo/crackpot?
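For what it's worth, the "ignore the input and rant" bot really is about a dozen lines. A throwaway sketch, with the canned rants invented for illustration:

```python
import random

# Minimal "woobot": ignores whatever you type and replies with a canned rant.
# The rant lines below are made up for illustration.
RANTS = [
    "Wake up! The mainstream refuses to even LOOK at the evidence.",
    "That's exactly what they trained you to say. Do your own research.",
    "It's all a cover-up for what's really going on. I've proved it.",
    "I've been saying this for YEARS and nobody will publish it.",
]

def woobot_reply(user_input: str) -> str:
    """Return a rant regardless of the input -- that's the whole joke."""
    return random.choice(RANTS)

if __name__ == "__main__":
    while True:
        try:
            line = input("> ")
        except EOFError:
            break
        print(woobot_reply(line))
```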
 
Being unable to tell the difference between a human and a bot could have enormous applications. Imagine having a problem with some software. The easy solution is to go to the vendor's online help page and type in the problem. The bot reads it, understands it, finds the answer and gives it to you. If any needed information wasn't supplied, the bot asks for it. If the bot doesn't know the answer, it hands the problem to a human.
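A rough sketch of that flow, with the knowledge base, the required fields and the hand-off step all invented as placeholders:

```python
# Sketch of the support-bot flow described above. The knowledge base,
# required fields and escalation step are placeholder assumptions.
KNOWLEDGE_BASE = {
    "printer offline": "Power-cycle the printer, then re-add it under Settings > Devices.",
    "licence expired": "Sign in to your account page and renew the licence key.",
}
REQUIRED_FIELDS = ["product version", "operating system"]

def handle_ticket(problem: str, details: dict) -> str:
    # 1. Ask for any information the user hasn't supplied yet.
    missing = [f for f in REQUIRED_FIELDS if f not in details]
    if missing:
        return "Could you tell me your " + " and ".join(missing) + "?"
    # 2. Try to match the problem against known answers.
    for key, answer in KNOWLEDGE_BASE.items():
        if key in problem.lower():
            return answer
    # 3. Otherwise hand the ticket to a human.
    return "I'm passing this on to a human agent; they'll get back to you."

print(handle_ticket("Printer offline after the update", {}))
print(handle_ticket("Printer offline after the update",
                    {"product version": "3.2", "operating system": "Windows"}))
```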
 
Turing guessed that humanity would be ripe for uploading, or simulation anyway, when the processing power or storage (I forget which) approached that of a real brain.

We're closing in on that in total processing power, though not in the massive parallelism, of course. But the problem of "programming" may be a bit stickier, to say the least.

There's also the issue that consciousness, a real, physical phenomenon, arises out of real world physics somehow, which is to say, real atoms, electrons, energy, and so forth, and not just from the "data pushing" that the brain also does.

Therefore a simulation of the brain qua neurons doing data processing will probably miss the actual consciousness bit, and thus fail to get beyond low-level (very low-level) non-conscious animal activity.

In other words, though consciousness may be used to data process, it is not a thing derived from data processing itself, any more than a leg or an eye is.
 
Assuming it will be possible, what's in it for me? So a simulation of my brain and consciousness lives on forever, which is cool, but I still die.

One way might be to replace one part of my brain at a time with electronic simulations. But at some point it seems it wouldn't be me anymore. I wonder at what point that would be?
 
