
What will it take to create strong AI?

It doesn't, actually.

So why bring up philosophical zombies?

In fact, though I don't even know that there IS a definition for "strong AI" that I'd be entirely comfortable with (which is largely why I have made no attempt to define one here), I'd be VERY impressed with a computer program that could act as a decent natural language parser -- even if I didn't have any good reason to believe that it ever "felt" a thing, or was capable of "philosophizing about its own existence" or "caring for the welfare of other beings".

I would be impressed with an advanced natural language parser as well.

But it does mean we can't build birds.

All I'm saying is that human-level intelligence may not require the complexity of a human brain, just like building a plane doesn't require the complexity of a bird.

Most pertinent to this discussion are that we don't understand how the brain produces intelligence, or consciousness, or how to measure those, or even how to define them.

We know that increased working memory tends to correlate with higher IQ. We have also used Restricted Boltzmann Machines to detect categories with minimal supervision. One could say that advances in machine learning like this one help us understand the brain.
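To give a concrete (if toy) sense of what I mean by minimal supervision, here's a rough Python sketch of a Restricted Boltzmann Machine trained with one-step contrastive divergence; the data, layer sizes, and learning rate are illustrative choices of mine, not taken from any particular study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Restricted Boltzmann Machine trained with one-step
    contrastive divergence (CD-1). The hidden units learn to respond to
    recurring patterns ("categories") in the data without any labels."""

    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def train_batch(self, v0):
        # Positive phase: hidden activations driven by the data.
        p_h0 = self.hidden_probs(v0)
        h0 = (self.rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one step of Gibbs sampling (reconstruction).
        p_v1 = self.visible_probs(h0)
        p_h1 = self.hidden_probs(p_v1)
        # CD-1 update: data statistics minus reconstruction statistics.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)

# Toy data: two "categories" of 6-bit patterns, never labelled as such.
rng = np.random.default_rng(1)
cat_a = np.array([1, 1, 1, 0, 0, 0], dtype=float)
cat_b = np.array([0, 0, 0, 1, 1, 1], dtype=float)
data = np.array([cat_a if rng.random() < 0.5 else cat_b for _ in range(200)])

rbm = RBM(n_visible=6, n_hidden=2)
for epoch in range(500):
    rbm.train_batch(data)

# After training, the two hidden units tend to specialise, one per category.
print(np.round(rbm.hidden_probs(np.vstack([cat_a, cat_b])), 2))
```

Nothing in the training loop is told what the categories are; the split falls out of the statistics of the data, which is the sense in which I mean "minimal supervision".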

You are defaulting to intuitive notions about these -- which is more than a little ironic in this context, since a good definition for "intuition" is: knowing something without knowing how you know it.

That's not accurate. Unless intelligence requires mechanisms at a quantum mechanical level, or from a soul, or some aether (that's not what you are suggesting, is it?) then copying a brain is sufficient to replicate the mechanisms that result in intelligence.
 
So why bring up philosophical zombies?
Because you came right out of the gate with some assumptions which bear closer examination, and a lot of that work has already been done by others. I was kind of hoping that you'd look at some of the things in the "See also" section; qualia, Searle's Chinese Room, etc. I could as easily have started with any of those, as they all link to one another. If you do that, and if you find that it is indeed a quagmire, don't say I didn't warn you.

If you want to take a "functionalist" approach and say, "If it walks like a duck and quacks like a duck, that's close enough, because duck-like walking/quacking is exactly the property I'm looking for," then that's fine. But when you start setting the bar at a central "director" which experiences consciousness and all that, you've left the realm of science far behind and are banging into some of the toughest problems in philosophy (or is it neuroscience? No widespread agreement even on that).

All I'm saying is that human-level intelligence may not require the complexity of a human brain
Trivially true, as even the simplest human brain confers "human-level intelligence", by definition.

...just like building a plane doesn't require the complexity of a bird.
Building a plane doesn't require the complexity of a bird because what a plane does is nowhere near as complex as what a bird does -- even if we concern ourselves only with that one particular aspect of bird behavior we call "flying".

We know that increased working memory tends to correlate with higher IQ.
We also know (at least some of us do) that "IQ", or "intelligence quotient", is at best a very dubious measure. One could identify some quality -- "sensitivity", perhaps, or "aesthetic appreciation" -- and then create a battery of tests for that quality -- but it would involve making a great many assumptions at every step of the way. Critics (Stephen Jay Gould, for instance) might question the degree to which your results were artifacts of the methodology, and even whether the methodology itself was merely the formalizing of some false assumptions and biases.

Unless intelligence requires mechanisms at a quantum mechanical level, or from a soul, or some aether [] then copying a brain is sufficient to replicate the mechanisms that result in intelligence.
Well, if you could jump high enough to escape Earth's gravity, you might fly to the moon, too. Try this analogy:

If someone gave you a number a quintillion digits long and told you that it was the product of two primes, then finding those prime factors would be a well-defined problem; one with only a single correct solution, attainable by a discrete series of steps. That doesn't mean you'd actually be able to solve it within any time frame that would make it a solvable problem. On the other hand, if someone gave you a one digit number and told you that it was derived by adding together the digits of a thousand digit number and repeating the process with the result until only a single digit remained, identifying the original number would not be possible either in practice or in principle, because no matter how many solutions you found, you'd have no way of knowing which one was correct.
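To make that second case concrete, here is a toy sketch (the numbers are my own illustrative choices) showing that the repeated digit sum is many-to-one, so no amount of computation could tell you which starting number was "the" original:

```python
# The repeated digit sum (the "digital root") maps many different numbers
# onto the same single digit, so the mapping is not invertible.
def digital_root(n: int) -> int:
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# Three different 12-digit numbers that all collapse to the same digit.
for n in (100000000008, 123456789000, 999999999990):
    print(n, "->", digital_root(n))
```

Every one of them prints 9, and so would astronomically many others; that is the sense in which the problem is ill-posed rather than merely hard.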

You seem to see the "strong AI" challenge as more like the first of those; it's just a matter of time and resources. Unless you identify the objective a lot more narrowly than you have, I see it as more like the second.
 
Because you came right out of the gate with some assumptions which bear closer examination, and a lot of that work has already been done by others. I was kind of hoping that you'd look at some of the things in the "See also" section; qualia, Searle's Chinese Room, etc. I could as easily have started with any of those, as they all link to one another. If you do that, and if you find that it is indeed a quagmire, don't say I didn't warn you.

OK, that's fair enough. However, there are problems with many of these thought experiments. For example, in Searle's Chinese Room, the fact that there's a person in the room causes an infinite regress. As for qualia, those are just the shades of experience that we cannot communicate accurately to others, both because they are too complex and because our senses differ physically, however small the variations, resulting in different sensations.

If you want to take a "functionalist" approach and say, "If it walks like a duck and quacks like a duck, that's close enough, because duck-like walking/quacking is exactly the property I'm looking for," then that's fine. But when you start setting the bar at a central "director" which experiences consciousness and all that, you've left the realm of science far behind and are banging into some of the toughest problems in philosophy (or is it neuroscience? No widespread agreement even on that).

I don't want to take a merely functionalist approach. I'm interested in human-level intelligence with or without consciousness. But if we can work out the processes that result in consciousness, or the sensation of free will, then all the better.

Trivially true, as even the simplest human brain confers "human-level intelligence", by definition.

I'll be more specific, then: average human-level intelligence may not require the complexity of a human brain.

Building a plane doesn't require the complexity of a bird because what a plane does is nowhere near as complex as what a bird does -- even if we concern ourselves only with that one particular aspect of bird behavior we call "flying".

Sure.

We also know (at least some of us do) that "IQ", or "intelligence quotient", is at best a very dubious measure. One could identify some quality -- "sensitivity", perhaps, or "aesthetic appreciation" -- and then create a battery of tests for that quality -- but it would involve making a great many assumptions at every step of the way. Critics (Stephen Jay Gould, for instance) might question the degree to which your results were artifacts of the methodology, and even whether the methodology itself was merely the formalizing of some false assumptions and biases.

I agree, but I'm not sure where you are going with that line of thought.

Well, if you could jump high enough to escape Earth's gravity, you might fly to the moon, too.

What's more likely? That you could jump high enough to escape Earth's gravity, or that in about 10 years (15-20 realistically) we'll have the ability to map a human brain to a neural network? I think the latter is more realistic and that your analogy is not adequate.

If someone gave you a number a quintillion digits long and told you that it was the product of two primes, then finding those prime factors would be a well-defined problem; one with only a single correct solution, attainable by a discrete series of steps. That doesn't mean you'd actually be able to solve it within any time frame that would make it a solvable problem. On the other hand, if someone gave you a one digit number and told you that it was derived by adding together the digits of a thousand digit number and repeating the process with the result until only a single digit remained, identifying the original number would not be possible either in practice or in principle, because no matter how many solutions you found, you'd have no way of knowing which one was correct.

What makes you think that creating strong AI is impossible? At the very least that's what you seem to be suggesting with your last analogy. We can make up analogies all day, but they are not very useful if they don't simplify reality in a relevant and practical manner.

You seem to see the "strong AI" challenge as more like the first of those; it's just a matter of time and resources. Unless you identify the objective a lot more narrowly than you have, I see it as more like the second.

It's not clear to me why you see the creation of strong AI as similar to your second analogy. I'm willing to concede that strong AI cannot be developed, but it's going to take a strong argument, not just the assertion that it is impossible.
 
For example, in Searle's Chinese Room, the fact that there's a person in the room causes an infinite regress.
Heh. Well, not necessarily, but I'm not sure this is the thread to get into all that.

I'm interested in human-level intelligence with or without consciousness.
Getting better. "Flexible and creative approaches to pattern recognition and problem solving", maybe? Isn't it the idiotically pedantic, insanely methodical approach a computer program takes that makes it such an unsatisfying substitute for even a not-particularly-bright human? When I start thinking about "flexibility", I find myself wondering how rigidly that should be defined. And I remember that Jung said: "Mental health is characterized by flexibility". And I wonder if we could come up with a battery of tests that could be used to measure a person's "mental flexibility quotient".

What's more likely? That you could jump high enough to escape Earth's gravity, or that in about 10 years (15-20 realistically) we'll have the ability to map a human brain to a neural network?
In my opinion, those are roughly equally likely.

What makes you think that creating strong AI is impossible? At the very least that's what you seem to be suggesting with your last analogy.
I'm suggesting that whether or not an objective is possible to achieve can depend a lot on how the objective is defined.

We can make up analogies all day, but they are not very useful if they don't simplify reality in a relevant and practical manner.
I find that comment much more relevant to this discussion than you may have intended. We can make up analogies all day -- and we DO make up analogies all day; and doing so IS very useful; and for exactly the reasons you mention. I think I'd be happy to accept as "intelligent" any agent able to consistently "make up [or grasp] analogies that simplify reality in a relevant and practical manner".

It's not clear to me why you see the creation of strong AI as similar to your second analogy.
Because what I see you doing is defining the objective as replicating the end results of a series of processes you can't possibly know anything about. Creating self-replicating machines and turning them loose in the hope that, through an evolutionary process, they might eventually replicate those results is different. It might work, but whatever emerged wouldn't really be artificial, would it? Wouldn't it be a natural intelligence, simply one that was based on a different type of substrate?
 
Heh. Well, not necessarily, but I'm not sure this is the thread to get into all that.

No, I think this thread needs a supplemental fork in the philosophy section.

Getting better. "Flexible and creative approaches to pattern recognition and problem solving", maybe? Isn't it the idiotically pedantic, insanely methodical approach a computer program takes that makes it such an unsatisfying substitute for even a not-particularly-bright human? When I start thinking about "flexibility", I find myself wondering how rigidly that should be defined. And I remember that Jung said: "Mental health is characterized by flexibility". And I wonder if we could come up with a battery of tests that could be used to measure a person's "mental flexibility quotient".

Tests like the Wisconsin card sort.

In my opinion, those are roughly equally likely.

We mapped a rat neocortical column in 2006. It is improbable that we can't do better already.

I'm suggesting that whether or not an objective is possible to achieve can depend a lot on how the objective is defined.

OK, I misunderstood originally.

I find that comment much more relevant to this discussion than you may have intended. We can make up analogies all day -- and we DO make up analogies all day; and doing so IS very useful; and for exactly the reasons you mention. I think I'd be happy to accept as "intelligent" any agent able to consistently "make up [or grasp] analogies that simplify reality in a relevant and practical manner".

All of our analogies are anthropocentric in nature. We would need to impart human experience onto the putative AI. That would require a massive expert system (see Cyc) or a body functionally similar to that of a human, with eyes, ears, and the ability to move.

Because what I see you doing is defining the objective as replicating the end results of a series of processes you can't possibly know anything about.

That's how reverse-engineering works.

Creating self-replicating machines and turning them loose in the hope that, through an evolutionary process, they might eventually replicate those results is different.

That's what this thread is about: discussing how strong AI can be created.

It might work, but whatever emerged wouldn't really be artificial, would it? Wouldn't it be a natural intelligence, simply one that was based on a different type of substrate?

There are many ways to use evolution to "search" for AI, but whether the result is natural or artificial is just semantics. I would propose that if we promoted such evolution, then it should be called artificial. But if you want to call it natural, then that's still what I'm looking for - as long as we can artificially accelerate and amplify its learning abilities.
 
Tests like the Wisconsin card sort.
Pretty narrow in scope. Bongard problems are better, but the sort of flexibility I was thinking of is more along the lines of Hofstadter-esque "slippability". Scoring that is not something that lends itself well to a simple pass-or-fail test; it's more of a subjective judgement call. Without a doubt, too much flexibility can be just as insane as too much rigidity -- but the quality that defines a human-like approach to problem-solving is not always the elegance of the solutions; sometimes it's the elegance of the mistakes.

We mapped a rat neocortical column in 2006. It is improbable that we can't do better already.
Now that you mention it, if we could build a mechanical rat that could navigate unfamiliar environments and demonstrate the learning and problem solving capacities of a real rat (or, for that matter, a cockroach), we'd be nine tenths of the way toward developing something with human-like capacities for those. If progress toward that goal were the only motivation for doing all this mapping, I'd have to say that it still looks to me like so much jumping at the moon.

That's how reverse-engineering works.
I don't see it as quite that simple. Asking "what in the heck was this thing designed to do?" is not the same as asking "how does this thing do the things it does?" -- and it can be very difficult to answer the second question until you're sure you know the answer to the first. If it turns out that in addition to doing the things the device was designed to do, it also does some things it was not explicitly designed to do -- and that those are the things that you find to be of the most interest -- then there's a good chance that you might replicate the explicit features and still not capture those interesting side-effects.

That's what this thread is about: discussing how strong AI can be created.
The evolutionary approach immediately encounters another question (currently being discussed in another thread), one which has to do with a commonly held view that the emergence of some life form with human level (if not human-like) intelligence was "inevitable" -- a throwback to outdated notions about humans representing the pinnacle of creation and all that, in my opinion.

There are many ways to use evolution to "search" for AI
Genetic algorithms can be used to search for solutions that already exist within a well-defined domain, and even then they aren't guaranteed to find THE optimal solution. If anyone has had any luck getting them to do things like redefine the problem, create new parameters, or extend the boundaries of the domain, I'd be very interested in seeing it.
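For what it's worth, here is roughly what I mean by "searching within a well-defined domain" -- a minimal genetic algorithm over fixed-length bit strings (the fitness function and parameters are toy choices of my own). Notice that nothing in the loop can add a new parameter or move the boundaries of the search space; it can only recombine and mutate within them:

```python
import random

random.seed(42)

GENOME_LEN = 20          # the domain is fixed in advance: 20-bit strings
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective ("OneMax"): count the 1s. The GA can only optimise
    # within this pre-defined problem; it cannot redefine the problem itself.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(a, b):
    # Single-point crossover within the fixed genome length.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Tournament selection: keep the fitter of two randomly chosen parents.
    def select():
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b
    population = [mutate(crossover(select(), select())) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of a possible", GENOME_LEN)
```

It will usually climb close to the maximum, but everything it "discovers" was already a point in the space we defined for it -- which is the gap between this kind of search and redefining the problem.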
 
