
What will it take to create strong AI?

Evolution.
... plus emotion. Intellect is not just the processing of memory and sensory data; it is a process fueled by emotions and nudged along its path by feelings. Emotion -- if only that of 'Wonder' -- underlies all of scientific inquiry. Take away a researcher's emotion, and what do you have? A very intelligent, yet unmotivated, organic machine.

(In my humble opinion, of course. Your opinions may vary.)
 
... We are nowhere close, and aren't likely to be anytime soon. For some reason other AI researchers get unreasonably angry when I make comments like that. The optimism is almost cult-like, and isn't really based on anything.

Faith. It's based on faith -- the belief in something that is unprovable. Faith involves hope, and hope drives optimism. AI researchers may have faith in the "Human Spirit" or some such intangible concept. They may also have faith that the determined application of this "Human Spirit" can accomplish anything (e.g., "Anything the human mind can conceive is possible"). The clue is in their unreasonable anger in response to a challenge to their faith.

I can conceive of Heather Locklear giving me a sponge bath. While this may be possible, it is also extremely unlikely. So I'm smirking right along with you when you say that true AI is a long way off.
 
I understand what you are saying, but why would a simulation of the brain fail? (Other than that it lacks the typical means of interaction with the outside world -- which suggests we might end up with a brain in a coma.)

Well, AI might eventually come about that way. But it's going to take a whole lot more than simulating each neuron and its connections to other neurons. Brain regions communicate with each other chemically as well. Plus, there are suggestions that the brain stores information in unexpected ways, such as in the spin of atoms, taking advantage of the bizarre laws of quantum mechanics, which few humans understand.
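
To make "simulating each neuron" concrete, here is a minimal sketch (a textbook leaky integrate-and-fire model; the parameter values are illustrative defaults, not anyone's published figures). Even before you get to the chemistry, this is the level of crudeness a large simulation typically starts from:

```python
# A minimal sketch, not any real project's code: a leaky
# integrate-and-fire neuron, roughly the crudest "one neuron" model
# used in large-scale simulations. Note everything it ignores:
# neurotransmitter chemistry, glia, and any quantum effects.

def simulate_lif(currents, dt=1e-3, tau=0.02, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    """Return spike times (seconds) for a neuron driven by a list of
    input currents, one per time step of length dt."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(currents):
        # Voltage leaks back toward rest while the input pushes it up.
        v += (dt / tau) * ((v_rest - v) + resistance * current)
        if v >= v_thresh:                 # threshold crossed: spike
            spike_times.append(step * dt)
            v = v_reset                   # then reset the membrane
    return spike_times

print(simulate_lif([2.0] * 1000))  # constant drive -> regular spiking
```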

I suspect the first efforts down these lines will result not in a brain-in-a-coma, but rather an AI that is ******* insane.

The optimism is based on hardware trends. But it lacks the realism of actually trying to put something together.

Yes, that there is the tragic flaw in the spate of comments about how we will soon (or already can) match the computational power of the human brain. AI is not a quantity problem, it's a quality problem.
 
If your computer can play chess better than you, it's because its software is optimized for that.
Whereas I am optimized for... something else, I suppose. Losing at chess, perhaps. Following that logic, perhaps what would satisfy you more than a computer that played chess well is one that played it badly -- and then got all ticked off and threw the pieces across the room. Of course, it couldn't just be blindly following a series of instructions leading to a call to a "tantrum" routine; it would have to genuinely care about the outcome of the game. How about one that played badly, felt like throwing a tantrum, but managed to restrain itself? Just how rich an inner life would you require such a machine to have before you would declare it to have met the test? And how would you know?
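
To make the "blindly following instructions" case vivid, here is a deliberately hollow sketch (entirely hypothetical, my own names): a canned "tantrum" routine triggered by an engine score. Nothing in it cares about anything; it just branches on a number.

```python
# A deliberately hollow sketch: scripted frustration, not felt.
# "evaluation" is a hypothetical engine score (negative = losing).

def tantrum():
    print("*sweeps the pieces across the room*")

def react_to_game(evaluation):
    # Blindly follow a rule that happens to end in a tantrum call.
    if evaluation < -2.0:
        tantrum()
    else:
        print("Good game.")

react_to_game(-3.5)  # throws the "tantrum" with no inner life at all
```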
 
Yes, that there is the tragic flaw in the spate of comments about how we will soon (or already can) match the computational power of the human brain. AI is not a quantity problem, it's a quality problem.

Thank you! The most reasoned explanation for why simply throwing more speed and hardware at the problem (or odd data as in some stupid A.I. experiments) will not cause a human brain to be an emergent property.
 
Cyborgification.

Cyborgification and nanomachines.

From a certain perspective, complex organisms are just societies of mutually-supporting nanomachine colonies.

As we make more progress in prosthetics, more progress in tapping into and extending the human nervous system, we will get closer and closer to the day when human brains need no longer be tightly bound to human--or humanoid--bodies.

When a human brain in a jar walks the street in a full-body prosthetic, it will look very much like strong AI--that is, a "robotic" body with human-caliber thought processes guiding its every move and making its every decision. Of course, it won't really be "artificial" intelligence.

And once you're putting human brains in jars and attaching them to full-body prosthetics, the possibilities are endless. Why a humanoid body? Why not a battleship? Why not an aircraft carrier? Why not extend the intelligence's "body" to include swarms of fighter drones or robotic tanks?

And once you're doing full-body prosthetics, what about other brain-in-a-jar options? Does it need to be natural-born? Can it be developed in a vat, fed sensory inputs for "education", and transferred to a purpose-built body when it's fully matured?

Does it even need to be a "natural" brain at all? Can it be some other brainlike community of cellular automata? Custom-built nerve cells, or custom-cultured on specialized matrices? Optimized for this function or that function? What about vat-grown brains derived from cats? Or bears?

That's where strong "artificial" intelligence will come from, the same place strong artificial legs come from: greater and greater success at connecting with and improving upon the natural organism.

Are integrated-circuit logic gates etched on silicon wafers, and clever binary-code algorithms, part of the road from here to there? Maybe. But I think today's supercomputers aren't that much closer to strong AI than the abacus. I think what is closer to strong AI is the robotic arm controlled by electrodes stuck to a monkey's skull.
 
Just how rich an inner life would you require such a machine to have before you would declare it to have met the test? And how would you know?


There has actually been a lot of discussion on this over the years, starting with AI pioneer Alan Turing's excellent concept of what has come to be known as the Turing test -- basically, you have human interrogators interview both a human subject and an AI computer, not knowing which is which. If the interrogators can't tell the difference in a statistically significant number of trials, the computer can be said to have passed. See http://en.wikipedia.org/wiki/Turing_test
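
A toy sketch of the statistical logic (my own illustrative numbers, not part of Turing's protocol): run many blinded trials and see whether interrogators beat coin-flipping by a meaningful margin.

```python
import random

def run_trials(n_trials, p_correct):
    """Count how often interrogators correctly unmask the machine.
    p_correct is the (simulated) chance of a correct identification."""
    return sum(random.random() < p_correct for _ in range(n_trials))

n = 200
hits = run_trials(n, p_correct=0.52)  # barely better than chance
# If interrogators can't beat 50% guessing by a statistically
# significant margin, the machine is said to have passed.
print(f"{hits}/{n} correct identifications ({hits / n:.0%} vs. 50% chance)")
```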
 
Thank you! The most reasoned explanation for why simply throwing more speed and hardware at the problem (or odd data as in some stupid A.I. experiments) will not cause a human brain to be an emergent property.


I have visions of people finally building a computer with the computing power of a human brain, turning it on, and breathlessly typing in something like, "Hello, what is your name?" and having the computer respond SYNTAX ERROR. :)
 
Yes, that there is the tragic flaw in the spate of comments about how we will soon (or already can) match the computational power of the human brain. AI is not a quantity problem, it's a quality problem.
It's both. You can't solve the quality problem if you haven't solved the quantity problem.
 
Whereas I am optimized for... something else, I suppose. Losing at chess, perhaps. Following that logic, perhaps what would satisfy you more than a computer that played chess well is one that played it badly -- and then got all ticked off and threw the pieces across the room.

No, I'm not concerned about anthropomorphism all that much.

Of course, it couldn't just be blindly following a series of instructions leading to a call to a "tantrum" routine; it would have to genuinely care about the outcome of the game. How about one that played badly, felt like throwing a tantrum, but managed to restrain itself? Just how rich an inner life would you require such a machine to have before you would declare it to have met the test? And how would you know?

When its actions are somewhat independent of its explicit programming and dependent on its understanding that its actions are its own.
 
I'm in graduate school studying cognitive science and IMO we need to understand the brain much better before we can create strong AI. My main interest is in "concepts" (mental representations of categories), memory and similarity assessment, but it seems like only the surface of these things has been scratched. The same could be said for affect, attention, etc. One thing I find really interesting is that our "conscious" "working memory" (i.e. trying to remember a string of numbers short term or engage in deduction/calculation 'in our head') seems to have such poor capacity and computational power, but it interacts with and draws on signals from the 'unconscious processing' aspect of the brain which has very high computational power. That lowly 4-7 chunks of information capacity for working memory seems to be an important part of what makes us more intelligent than other animals for whom it's even smaller.
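
A toy demonstration of why chunking matters (my own construction, not a model from the literature): a buffer capped at 7 slots recalls three times as many raw symbols once a long-term store lets it group them into familiar units.

```python
CAPACITY = 7                                   # the classic 7 +/- 2
LEXICON = {"FBI", "CIA", "NSA", "IRS", "DNA", "USA", "CNN"}

def recall(symbols):
    """Greedily pack symbols into known 3-letter chunks, keep at most
    CAPACITY buffer items, and report how many raw symbols survive."""
    buffer, i = [], 0
    while i < len(symbols) and len(buffer) < CAPACITY:
        trigram = "".join(symbols[i:i + 3])
        if trigram in LEXICON:
            buffer.append(trigram)             # one slot, three symbols
            i += 3
        else:
            buffer.append(symbols[i])          # one slot, one symbol
            i += 1
    return sum(len(item) for item in buffer)

print(recall(list("FBICIANSAIRSDNAUSACNN")))   # 21 symbols via chunks
print(recall(list("XQZJVKWPLMH")))             # only 7 without them
```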

Someone made a good comment about the fly. It's silly that people are trying to make programs to beat the Turing test, which targets one of the highest forms of animal intelligence, when we can't even simulate insect intelligence properly.
 
When its actions are somewhat independent of its explicit programming and dependent on its understanding that its actions are its own.
Again: How would you know what its actions were based on? Do you have perfect knowledge of what your own actions are based on? Surprisingly sophisticated and flexible behavior can emerge from relatively simple and quite rigid sets of rules, and it isn't necessary for the programs executing them to possess any high-level "understanding" of those rules, or the reasons they exist, or the real-world consequences of following them (or of ignoring them; or even of what it would mean to ignore them), or even that they are rules. How can you be certain that humans are not essentially automatons themselves, following sets of rules which, though much larger and more complex, are ultimately just as rigid?
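
One concrete example (mine, not the poster's): Rule 110, an elementary cellular automaton. Each cell follows a single rigid rule -- look at itself and its two neighbors, then consult a fixed 8-entry table -- yet the global behavior is rich enough that Rule 110 has been proven Turing-complete. No cell "understands" anything.

```python
RULE = 110  # the fixed 8-entry lookup table, packed into one byte

def step(cells):
    """Apply the rule to every cell (wrapping at the edges)."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 63 + [1]              # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
```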
 
I have visions of people finally building a computer with the computing power of a human brain, turning it on, and breathlessly typing in something like, "Hello, what is your name?" and having the computer respond SYNTAX ERROR. :)

lol. That's why I hate the Turing test. We speak drawing on a rich history of interacting with our environment -- we can speak meaningfully to others because they've interacted with the environment in similar ways and process information in similar ways. Words convey external categories and have strong associations with emotion and memory. How is a PC-style computer supposed to talk in words that for the most part have zero experiential relevance for it? To really conquer the Turing test it would need to be much more intelligent than a human. Or it would need to be able to interact with the external environment through a robotic body with advanced sensory processing and spend a great deal of time learning.
 
Computer chess has reached super-human game-play

I've always been puzzled about why people think this is impressive. All it really means is that computers are much faster at searching the available space of moves than humans are. But faster isn't more intelligent. Consider, for example, the game Go. In one sense it's a simpler game (there's only one kind of piece), but computationally it's much harder, because the number of possible moves is much larger. That is a killer when it comes to brute force search algorithms. And not surprisingly, nobody has made a computer Go player which can compete with top human players.
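
Some back-of-envelope arithmetic makes the point (using commonly cited average branching factors, roughly 35 legal moves per chess position versus roughly 250 in Go; exact values vary by position):

```python
CHESS_BRANCHING, GO_BRANCHING = 35, 250  # rough averages

for depth in (2, 4, 6):
    chess_nodes = CHESS_BRANCHING ** depth
    go_nodes = GO_BRANCHING ** depth
    print(f"depth {depth}: chess ~{chess_nodes:.1e} positions, "
          f"Go ~{go_nodes:.1e} ({go_nodes // chess_nodes:,}x more)")
```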

But what will it take for a computer program to formulate its own questions and feel compelled to answer them? What will it take for a computer program to recognize its own existence and interact with humans and other sentient beings?

We might never figure that out. I'm of the opinion that no intelligence is ever smart enough to understand itself. So we might never be able to make computers which are smarter than we are. Faster? Sure, we already have that. More knowledgeable? Absolutely. Smarter? I don't think we'll succeed. I don't think we'll even succeed at "as smart".

Someone working on the Blue Brain project claimed that we'd have human level intelligence in 2019: "It is not impossible to build a human brain and we can do it in 10 years".

Pipe dream. If it ever happens, it'll take much more than 10 years.

Given powerful enough hardware, I agree with that sentence.

I don't. The problem isn't ultimately a hardware one, it's a software one.

Can a set of machine learning algorithms with a central "director" experience consciousness?

We really don't know, since we don't even really understand what consciousness is.
 
I'm not a tech person at all, so I don't know much about this subject; however, I recently read something interesting on it in the book "Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel" by Michio Kaku. He said a really big problem right now with robot technology is object recognition: scientists can't figure out how to make a robot with AI PERCEIVE what an object is. So a military robot or a Mars rover or what have you will be able to sense that objects are there in order to maneuver around them or interact with them, but it doesn't actually recognize what those objects ARE. He wrote that until they figure out how to make robots capable of accurately interpreting their surroundings and recognizing objects for what they are, as opposed to just "objects," we will be limited in robotic/AI technology.
 
If your computer can play chess better than you, it's because its software is optimized for that. It can't play master-level chess and attend to your children at the same time.

We don't even have to go beyond chess to see the limitations of computer chess. If I whip out a chess set you've never seen before, you can probably figure out which pieces are which pretty easily. Your computer can't even do that.
 
I'm in graduate school studying cognitive science and IMO we need to understand the brain much better before we can create strong AI. My main interest is in "concepts" (mental representations of categories), memory and similarity assessment, but it seems like only the surface of these things has been scratched. The same could be said for affect, attention, etc. One thing I find really interesting is that our "conscious" "working memory" (i.e. trying to remember a string of numbers short term or engage in deduction/calculation 'in our head') seems to have such poor capacity and computational power, but it interacts with and draws on signals from the 'unconscious processing' aspect of the brain which has very high computational power. That lowly 4-7 chunks of information capacity for working memory seems to be an important part of what makes us more intelligent than other animals for whom it's even smaller.

Someone made a good comment about the fly. It's silly that people are trying to make programs to beat the Turing test, which targets one of the highest forms of animal intelligence, when we can't even simulate insect intelligence properly.

lol. That's why I hate the Turing test. We speak drawing on a rich history of interacting with our environment -- we can speak meaningfully to others because they've interacted with the environment in similar ways and process information in similar ways. Words convey external categories and have strong associations with emotion and memory. How is a PC-style computer supposed to talk in words that for the most part have zero experiential relevance for it? To really conquer the Turing test it would need to be much more intelligent than a human. Or it would need to be able to interact with the external environment through a robotic body with advanced sensory processing and spend a great deal of time learning.

Cornsail, you being an actual student of cognitive science and not just a computer geek with a metaphysical conviction that computers are smart just because they have processing power, I salute your critical assessment of your field.

I would like to repeat that much more critical thought will need to go into the whole question of AI, not just muscle.
 
Evolution.
This was my first thought as well.

And basically, I think it would be rather "simple". Just take a computer with a lot of empty hard-drive space, set it up to play endless tic-tac-toe with itself (for, say, half a million years perhaps), and check on it occasionally to see if evolution takes over and it learns the great Joshua lesson or not. If it does, it will figure out eventually what to do with the extra hard-drive space and begin writing its own programs as it "learns new stuff". Then it's ready to tackle global thermonuclear war correctly, right? You've got yourself a Terminator that can also be trusted to babysit :)
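
For what it's worth, the self-play half of that is easy to sketch (this is tabular reinforcement learning with made-up parameters, not open-ended evolution): the program plays tic-tac-toe against itself and nudges up the value of positions that led to wins. It does "learn" -- but only tic-tac-toe, which is rather the catch.

```python
import random
from collections import defaultdict

WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8),
        (0,4,8), (2,4,6)]
value = defaultdict(lambda: 0.5)   # position -> estimated value for X

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

def self_play_game(epsilon=0.1, alpha=0.2):
    board, history, player = [" "] * 9, [], "X"
    while winner(board) is None:
        moves = [i for i, s in enumerate(board) if s == " "]
        if random.random() < epsilon:
            move = random.choice(moves)          # explore
        else:                                    # pick the best-looking move
            def score(m):
                trial = board[:]
                trial[m] = player
                v = value[tuple(trial)]
                return v if player == "X" else -v
            move = max(moves, key=score)
        board[move] = player
        history.append(tuple(board))
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": 0.0, "draw": 0.5}[winner(board)]
    for position in history:                     # back up the outcome
        value[position] += alpha * (result - value[position])

for _ in range(20000):
    self_play_game()
print(f"{len(value)} positions evaluated after 20,000 self-play games")
```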

When its actions are somewhat independent of its explicit programming and dependent on its understanding that its actions are its own.
This is precisely why I don't think "true AI" will ever exist in the way someone expects it to. I think the most that can be hoped for is a simulation that creates the illusion.

Can you name one thing that exists in the universe that operates independent of its "programming" in any form or fashion whatsoever?
 
The problem with evolution (e.g., genetic and evolutionary programs) is that, if they are to exploit the strengths of evolution, they will be undirected (no guiding 'hand of god'). Okay. We could apply some survival criteria so as to (hopefully, mind you) make it more amenable for intelligence or sentience to evolve, because intelligence would have more favorable survival qualities under those criteria. Nonetheless, can we end up with a human-like sentience under these circumstances? Probably not -- or not for a very long time from now (many decades or centuries). Remember that 'evolution' here is going to be both internal and external. That is, the hardware and software as well as the 'evolving' algorithms. This won't be a one-off run. It will be many millions of runs, tweaking and upgrading and fixing, with the involvement of many different people and groups over time. Maybe something akin to the DARPA autonomous vehicle competition.
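
A minimal genetic-algorithm sketch of "applying survival criteria" (the target string is a stand-in fitness function of my own choosing; real work would evolve programs or network weights, and would not know the answer in advance):

```python
import random

TARGET = "SENTIENCE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(genome):
    """Survival criterion: number of letters matching the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

# Start from random noise and let selection pressure do the work.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:20]                 # the "survival criteria"
    population = [mutate(random.choice(survivors)) for _ in range(100)]

population.sort(key=fitness, reverse=True)
print(f"generation {generation}: best genome = {population[0]!r}")
```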
 
This is precisely why I don't think "true AI" will ever exist in the way someone expects it to. I think the most that can be hoped for is a simulation that creates the illusion.

If that is true, then consciousness is also an illusion to us.

Can you name one thing that exists in the universe that operates independent of its "programming" in any form or fashion whatsoever?

Besides evolution, not really. But I'm not suggesting anything operates independent of its programming. What I am saying, however, is that some sentient beings seem to intentionally adjust to their environment. That they have some level of self-awareness, and that this self-awareness influences their actions.
 
