DC
My bet is on cockroaches or viruses... intelligent beings? Thus not humans!
It doesn't, actually.
In fact, though I don't even know that there IS a definition for "strong AI" that I'd be entirely comfortable with (which is a lot of why I have made no attempt to define it here), I'd be VERY impressed with a computer program that could act as a decent natural language parser -- even if I didn't have any good reason to believe that it ever "felt" a thing, or was capable of "philosophizing about its own existence" or "caring for the welfare of other beings".
But it does mean we can't build birds.
Most pertinent to this discussion is that we don't understand how the brain produces intelligence or consciousness, how to measure those, or even how to define them.
You are defaulting to intuitive notions about these -- which is more than a little ironic in this context, since a good definition for "intuition" is: knowing something without knowing how you know it.
"So why bring up philosophical zombies?"
Because you came right out of the gate with some assumptions which bear closer examination, and a lot of that work has already been done by others. I was kind of hoping that you'd look at some of the things in the "See also" section: qualia, Searle's Chinese Room, etc. I could as easily have started with any of those, as they all link to one another. If you do that, and if you find that it is indeed a quagmire, don't say I didn't warn you.
If you want to take a "functionalist" approach and say, "If it walks like a duck and quacks like a duck, that's close enough, because duck-like walking/quacking is exactly the property I'm looking for," then that's fine. But when you start setting the bar at a central "director" which experiences consciousness and all that, you've left the realm of science far behind and are banging into some of the toughest problems in philosophy (or is it neuroscience? There's no widespread agreement even on that).
"All I'm saying is that human-level intelligence may not require the complexity of a human brain..."
Trivially true, as even the simplest human brain confers "human-level intelligence", by definition.
"...just like building a plane doesn't require the complexity of a bird."
Building a plane doesn't require the complexity of a bird because what a plane does is nowhere near as complex as what a bird does -- even if we concern ourselves only with that one particular aspect of bird behavior we call "flying".
"We know that increased working memory tends to correlate with higher IQ."
We also know (at least some of us do) that "IQ", or "intelligence quotient", is at best a very dubious measure. One could identify some quality -- "sensitivity", perhaps, or "aesthetic appreciation" -- and then create a battery of tests for that quality, but it would involve making a great many assumptions at every step of the way. Critics (Stephen Jay Gould, for instance) might question the degree to which your results were artifacts of the methodology, and even whether the methodology itself was merely the formalizing of some false assumptions and biases.
"Unless intelligence requires mechanisms at a quantum mechanical level, or from a soul, or some aether, then copying a brain is sufficient to replicate the mechanisms that result in intelligence."
Well, if you could jump high enough to escape Earth's gravity, you might fly to the moon, too. Try this analogy:
If someone gave you a number a quintillion digits long and told you that it was the product of two primes, then finding those prime factors would be a well-defined problem: one with only a single correct solution, attainable by a discrete series of steps. That doesn't mean you'd actually be able to solve it within any time frame that would make it a solvable problem. On the other hand, if someone gave you a one-digit number and told you that it was derived by adding together the digits of a thousand-digit number and repeating the process with the result until only a single digit remained, identifying the original number would not be possible either in practice or in principle, because no matter how many solutions you found, you'd have no way of knowing which one was correct.
You seem to see the "strong AI" challenge as more like the first of those; it's just a matter of time and resources. Unless you identify the objective a lot more narrowly than you have, I see it as more like the second.
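To put the two problems above in concrete terms, here is a small Python sketch (the numbers and the digital_root helper are toy choices for illustration, not anything proposed in the thread): factoring has one verifiable answer, however slow it is to find, while the digit-summing process destroys information, so enormously many inputs collapse onto the same single digit.
```python
# Illustrative only: tiny stand-ins for the thought experiment above.

def digital_root(n: int) -> int:
    """Repeatedly sum the decimal digits of n until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# Problem 1: factoring a product of two primes.
# Slow for enormous inputs, but the answer is unique and easy to verify.
semiprime = 91
factors = [p for p in range(2, semiprime) if semiprime % p == 0]
print(factors)  # [7, 13]

# Problem 2: recovering the original number from its digital root.
# Easy to compute forward, hopeless to invert: a huge share of all
# possible inputs map onto the same single digit.
candidates = sum(1 for n in range(1, 1_000_000) if digital_root(n) == 5)
print(candidates)  # roughly one in nine of all the numbers tried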
Someone said evolution, right?
~~ Paul
"For example, in Searle's Chinese Room, the fact that there's a person in the room causes an infinite regress."
Heh. Well, not necessarily, but I'm not sure this is the thread to get into all that.
"I'm interested in human-level intelligence with or without consciousness."
Getting better. "Flexible and creative approaches to pattern recognition and problem solving", maybe? Isn't it the idiotically pedantic, insanely methodical approach a computer program takes that makes it such an unsatisfying substitute for even a not-particularly-bright human? When I start thinking about "flexibility", I find myself wondering how rigidly that should be defined. And I remember that Jung said: "Mental health is characterized by flexibility". And I wonder if we could come up with a battery of tests that could be used to measure a person's "mental flexibility quotient".
"What's more likely? That you could jump high enough to escape Earth's gravity, or that in about 10 years (15-20 realistically) we'll have the ability to map a human brain to a neural network?"
In my opinion, those are roughly equally likely.
"What makes you think that creating strong AI is impossible? At the very least that's what you seem to be suggesting with your last analogy."
I'm suggesting that whether or not an objective is possible to achieve can depend a lot on how the objective is defined.
"We can make up analogies all day, but they are not very useful if they don't simplify reality in a relevant and practical manner."
I find that comment much more relevant to this discussion than you may have intended. We can make up analogies all day -- and we DO make up analogies all day; and doing so IS very useful; and for exactly the reasons you mention. I think I'd be happy to accept as "intelligent" any agent able to consistently "make up [or grasp] analogies that simplify reality in a relevant and practical manner".
"It's not clear to me why you see the creation of strong AI as similar to your second analogy."
Because what I see you doing is defining the objective as replicating the end results of a series of processes you can't possibly know anything about.
Creating self-replicating machines and turning them loose in the hope that, through an evolutionary process, they might eventually replicate those results is different.
It might work, but whatever emerged wouldn't really be artificial, would it? Wouldn't it be a natural intelligence, simply one that was based on a different type of substrate?
"Tests like the Wisconsin card sort."
Pretty narrow in scope. Bongard problems are better, but the sort of flexibility I was thinking of is more along the lines of Hofstadter-esque "slippability". Scoring that is not something that lends itself well to a simple pass-or-fail test; it's more of a subjective judgement call. Without a doubt, too much flexibility can be just as insane as too much rigidity -- but the quality that defines a human-like approach to problem-solving is not always the elegance of the solutions; sometimes it's the elegance of the mistakes.
"We mapped a rat neocortical column in 2006. It is improbable that we can't do better already."
Now that you mention it, if we could build a mechanical rat that could navigate unfamiliar environments and demonstrate the learning and problem-solving capacities of a real rat (or, for that matter, a cockroach), we'd be nine-tenths of the way toward developing something with human-like capacities for those. If progress toward that goal were the only motivation for doing all this mapping, I'd have to say that it still looks to me like so much jumping at the moon.
"That's how reverse-engineering works."
I don't see it as quite that simple. Asking "what in the heck was this thing designed to do?" is not the same as asking "how does this thing do the things it does?" -- and it can be very difficult to answer the second question until you're sure you know the answer to the first. If it turns out that in addition to doing the things the device was designed to do, it also does some things it was not explicitly designed to do -- and that those are the things that you find to be of the most interest -- then there's a good chance that you might replicate the explicit features and still not capture those interesting side effects.
"That's what this thread is about: discussing how strong AI can be created."
The evolutionary approach immediately encounters another question (currently being discussed in another thread), one which has to do with a commonly held view that the emergence of some life form with human-level (if not human-like) intelligence was "inevitable" -- a throwback to outdated notions about humans representing the pinnacle of creation and all that, in my opinion.
"There are many ways to use evolution to 'search' for AI"
Genetic algorithms can be used to search for solutions that already exist within a well-defined domain, and even then they aren't guaranteed to find THE optimal solution. If anyone has had any luck getting them to do things like redefine the problem, create new parameters, or extend the boundaries of the domain, I'd be very interested in seeing it.
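To make that limitation concrete, here is a minimal genetic-algorithm sketch (a toy example of my own choosing; the genome length, population size, and fitness function are assumptions, not anything from the thread). Everything that defines the search space is fixed up front, so the algorithm can only wander around inside the domain it was handed.
```python
# Minimal GA sketch: evolve 20-bit strings toward the hard-coded goal
# of "as many 1s as possible". The domain and objective are fixed in advance.
import random

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50

def fitness(genome):
    # The objective is baked in; the algorithm cannot redefine it.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half, then breed it to refill the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))  # tends toward 20, with no guarantee
```
Nothing in that loop can decide that 20 bits is the wrong representation or that counting 1s is the wrong goal; those choices sit outside the search, which is the point being made above.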