What will it take to create strong AI?

Richard Masters

Ever since I watched movies like Terminator 2 when I was younger, I've been obsessed with Artificial Intelligence. Having a computer solve your problems - examine your DNA and give you genetically-tailored medical advice, solve any problem that requires strategy or thought, generate storylines that appeal to you and turn them into realistic 3D-animated movies or video games - is incredibly appealing.

Sophisticated matrix manipulation won the Netflix Prize, and there are other applications for Singular Value Decomposition besides choosing movies from sparse data, such as compressed sensing. Neural networks can play an impressive game of 20Q, computer chess has reached superhuman levels of play, and evolutionary algorithms have created optimized antennas for NASA.
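To make the SVD idea concrete, here's a minimal sketch in Python/NumPy on a made-up ratings matrix. (The Netflix-winning systems factored only the observed entries, with regularization; this naive version treats the unrated zeros as if they were real ratings, so take it purely as an illustration.)

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are movies, 0 = unrated.
# (Made-up numbers; real systems work with millions of sparse entries.)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

# Truncated SVD: keep only the top k singular values, so the rank-k
# reconstruction captures broad "taste" factors and fills in the gaps.
k = 2
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Reconstructed scores in previously unrated cells suggest recommendations.
print(np.round(R_hat, 2))
```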

But what will it take for a computer program to formulate its own questions and feel compelled to answer them? What will it take for a computer program to recognize its own existence and interact with humans and other sentient beings?

Someone working on the Blue Brain project claimed that we'd have human-level intelligence by 2019: "It is not impossible to build a human brain and we can do it in 10 years".

Given powerful enough hardware, I agree. From a materialist point of view, if we can simulate the brain down to the neuronal level, then we can build a software-based human brain.
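To give a sense of what "simulating the brain to a neuronal level" means at its very crudest, here's a toy leaky integrate-and-fire neuron in Python. All the constants are arbitrary illustrative values; something like Blue Brain uses vastly richer multi-compartment models, so this is a sketch of the idea, nothing more.

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane potential v decays toward
# rest and is pushed up by input; crossing a threshold emits a spike and
# resets v. All constants are toy values (units loosely mV and ms).
dt, tau = 0.1, 10.0
v_rest, v_reset, v_thresh = -65.0, -70.0, -50.0

v = v_rest
spike_times = []
rng = np.random.default_rng(0)
for step in range(1000):                        # 100 ms of simulated time
    drive = 20.0 + 5.0 * rng.standard_normal()  # noisy input drive (mV)
    v += dt * ((v_rest - v) + drive) / tau
    if v >= v_thresh:
        spike_times.append(step * dt)           # record spike time (ms)
        v = v_reset

print(f"{len(spike_times)} spikes in 100 ms")
```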

But what if we want to build strong AI some other way? What minimal organizational structure is required for an algorithm, parallel or otherwise, to experience consciousness?

Can a set of machine learning algorithms with a central "director" experience consciousness? If natural selection is responsible for our brains, then surely there are optimizations to be made and better ways to create an intelligent being.

What do you think it will take to create strong AI?
 
You're asking some very difficult but interesting questions. I don't think anyone knows what it would take for a computer system to exhibit a "need" to learn more.

The best option, I would think, is to create a large neural network, since that's basically what our brains are, and just keep experimenting with what's possible and what isn't. Sooner or later we might have a system not unlike our own brains.
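To show what "a large neural network" means in miniature, here's a sketch of the smallest interesting case: a two-layer network learning XOR by backpropagation. The layer sizes, learning rate, and iteration count are arbitrary choices, not recommendations.

```python
import numpy as np

# A minimal two-layer network learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)     # hidden activations
    out = sigmoid(h @ W2 + b2)   # network output
    # Backpropagate the squared error and take a gradient step.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]
```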

About the 2019 prediction: I'm not sure it's such a good idea to make predictions like that. We might have some incredible AI by then, and we might not. Better to just stick to what we know now and try to develop it one step further at a time.
 
"If we had some bread, we could have a baloney sandwich, if we had some baloney".

Your sarcasm kind of misses the point. The computational power of a home computer is expected to reach that of the human brain by 2025.
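For what it's worth, the usual back-of-envelope behind such projections runs like this (every factor is a contested estimate, not a fact):

```python
# Common (and contested) estimate of the brain's raw processing rate:
neurons  = 1e11   # ~100 billion neurons
synapses = 1e3    # ~1,000 synapses per neuron (some estimates say 1e4)
rate_hz  = 1e2    # ~100 operations per synapse per second
print(f"~{neurons * synapses * rate_hz:.0e} synaptic ops/sec")  # ~1e+16
```

Whether "synaptic ops per second" is even the right unit of comparison is itself debatable.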


Are you saying the question is unanswerable, or that the answer would have no meaning?
 
Intelligence is a hard thing to define. Crows and octopuses appear intelligent, suggesting brain size is less critical than brain structure.
But I have yet to see a machine display any hint of intelligence in the sense of what I've seen in crows.
I doubt that awareness could be predicted purely from a 100% accurate map of the brain. I equally doubt we can predict what may emerge from machines. But consider the following:
Telepathy.
Remote viewing.
Telekinesis.

Machines are already capable of all three.

Shared hardware, shared software, shared protocols; that's telepathy just as much as me and a total stranger making eye contact across a dentist's waiting room while overhearing a receptionist on the phone make a dreadful gaffe. We saw each other wince. We each knew exactly what the other thought.
The internet does that all the time.

Computers survive, aye, and reproduce, because they make people happy. They don't need intelligence.
They certainly don't need conscious intelligence as it would slow them down dreadfully, just as our conscious intelligence is vastly slower than our unconscious data processing.

To evolve intelligence, we must cut machines free from the need to serve people and let them evolve to meet their own needs. That might produce all manner of unexpected results. Intelligence might be one, but I'm actually inclined to doubt it.
I think something more akin to hive awareness is more likely. Global awareness: more like a sensory system than a mind.

Wild speculation, obviously.
 
I was halfway to a graduate degree in AI before I dropped out (on realizing that nobody was making any real progress).

There's kind of a running joke in AI research that real, true AI is always ten years away. So I had to smirk at the 2019 prediction in the OP.

There's another running joke that it's like climbing trees to reach the moon. That's where I sit. We are nowhere close, and aren't likely to be anytime soon. For some reason other AI researchers get unreasonably angry when I've made comments like that. The optimism is almost cult-like, and isn't really based on anything.
 
Your sarcasm kind of misses the point.
Does it?

The computational power of a home computer is expected to reach that of the human brain by 2025.
The one I've got sitting in front of me right now exceeds the computational power of my own brain by orders of magnitude -- in certain areas. It already "recognizes its own existence" (in a way) and "interacts with humans" (sort of). You've got some assumptions that need unpacking.

Are you saying the question is unanswerable, or that the answer would have no meaning?
Yes.
 
A genetic algorithm that evolves neural networks (or a network of neural networks) could be an avenue. The main problem would be to define an appropriate fitness function. And it would also need huge computational resources.
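A bare-bones sketch of that idea, with every detail an illustrative assumption: a fixed 2-4-1 network topology, a small population, simple truncation selection plus Gaussian mutation, and a toy XOR fitness function standing in for the genuinely hard problem of defining fitness for intelligence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: evolve the weights of a tiny fixed-topology network to fit XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    # Unpack a flat 17-gene genome into a 2-4-1 network with tanh hidden units.
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # negated error: higher is fitter

pop = rng.standard_normal((50, 17))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-25:]]                      # keep the top half
    children = elite + 0.1 * rng.standard_normal(elite.shape)  # mutated copies
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(forward(best, X), 2))   # should approach [0, 1, 1, 0]
```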
 
I was halfway to a graduate degree in AI before I dropped out (on realizing that nobody was making any real progress).

There's kind of a running joke in AI research that real, true AI is always ten years away.
Well, that's partly because we keep redefining what real, true AI is. Which is partly because we didn't have a good definition in the first place.

But to achieve human-level intelligence you'll need human-level processing capacity, which is achievable now, though still pretty expensive, and a much better understanding of the processes involved than we have now. Or a much more powerful computer, so that you can simulate the brain at a lower level where we have a better understanding.

There's another running joke that it's like climbing trees to reach the moon. That's where I sit. We are nowhere close, and aren't likely to be anytime soon. For some reason other AI researchers get unreasonably angry when I've made comments like that. The optimism is almost cult-like, and isn't really based on anything.
Well, the computational requirements are close to being met, so that's one issue solved. Still leaves a lot of work to do, of course.

I'd say it's extremely unlikely that we'll have anything like a human level AI in 10 years. Even 20 seems to be a stretch, but that far out it's hard to predict anything other than that we'll still have vi and emacs and wars between their respective users.
 
The main problem might be recognising it.
For all we know, machines already are conscious, but the output we are interested in has absolutely nothing to do with that consciousness - in the same way that we are conscious of counting to three, though that process is not the process that we are aware of as being aware.

These are not the processes you are looking for. Move along...:)
 
I was halfway to a graduate degree in AI before I dropped out (on realizing that nobody was making any real progress).

There's kind of a running joke in AI research that real, true AI is always ten years away. So I had to smirk at the 2019 prediction in the OP.

There's another running joke that it's like climbing trees to reach the moon. That's where I sit. We are nowhere close, and aren't likely to be anytime soon. For some reason other AI researchers get unreasonably angry when I've made comments like that. The optimism is almost cult-like, and isn't really based on anything.

This is my position almost exactly. While my interest in AI wasn't part of a degree course, I did dabble extensively in it for a decade or so. Some of the texts that I used were college-level textbooks - not your 'AI for dummies' crap.

Having no idea how intelligence arises or its intrinsic characteristics makes aiming for it sort of like trying to shoot the purple barglesnorfer without knowing what one is or looks like. In other words, how can we create autonomous, sentient, intelligent machines when we can't even do the same for a fly in a lab? Have we? No. Make a fly with some form of intelligence we would define as well beyond its normal instinctual activities and we can make a computer that thinks. Otherwise it is all blowing smoke up the posterior orifice.
 
A genetic algorithm that evolves neural networks (or a network of neural networks) could be an avenue. The main problem would be to define an appropriate fitness function. And it would also need huge computational resources.
And that's the rub. BFD if my desk computer has my brain's computational power. Our neural networks evolved over a few billion years with trillions of generations across quadrillions of individuals (numbers I made up, but they seem in the ballpark). All of that produced only 1 extant species with our intelligence (I'm willing to consider Neanderthals as a non-extant species that had roughly our cognitive power). Reproducing that evolution would require all of the world's computational power devoted to the task for 10^(big single digit..small 2 digit) years.


Of course, evolving a network on the computer could be directed towards intelligence. Fitness functions would not just be surviving in some niche, but surviving in a way that requires symbolic processing of the environment.

But all of this, including what I said, is baseless speculation. I imagine we'll have several breakthroughs that we aren't predicting, breakthroughs that will either make it much easier for us to produce AI, or perhaps much harder (say, a result that shows some important aspect of the brain is non-computational). I'm not counting as AI something like duplicating a brain; that is an interesting but different problem.

I suspect we will eventually have various emergent intelligences appear as we develop complex interacting systems. Intelligences that may not bear much relationship to our minds at all. But who knows?
 
Does it?

The one I've got sitting in front of me right now exceeds the computational power of my own brain by orders of magnitude -- in certain areas.

Sure, I'm referring to raw computational power. If your computer can play chess better than you, it's because its software is optimized for that. It can't play master-level chess and attend to your children at the same time.

It already "recognizes its own existence" (in a way) and "interacts with humans" (sort of). You've got some assumptions that need unpacking.

That's not what is meant, though. Qualifiers like "in a way" and "sort of" make your statement true, but does the computer ever philosophize about its existence? Does it care for the welfare of other beings?
 
I was halfway to a graduate degree in AI before I dropped out (on realizing that nobody was making any real progress).

There's kind of a running joke in AI research that real, true AI is always ten years away. So I had to smirk at the 2019 prediction in the OP.

I understand what you are saying, but why would a simulation of the brain fail? (Other than that it lacks the typical means of interaction with the outside world, which suggests we might end up with a brain in a coma.)

There's another running joke that it's like climbing trees to reach the moon. That's where I sit. We are nowhere close, and aren't likely to be anytime soon. For some reason other AI researchers get unreasonably angry when I've made comments like that. The optimism is almost cult-like, and isn't really based on anything.

The optimism is based on hardware trends. But it lacks the realism of actually trying to put something together.
 
