
What will it take to create strong AI?

We don't even have to go beyond chess to see the limitations of computer chess. If I whip out a chess set you've never seen before, you can probably figure out which pieces are which pretty easily. Your computer can't even do that.

Agreed. Just wanted to use a clear-cut example.
 
If that is true, then consciousness is also an illusion to us.



Besides evolution, not really. But I'm not suggesting anything operates independently of its programming. What I am saying, however, is that some sentient beings seem to intentionally adjust to their environment: they have some level of self-awareness, and that self-awareness influences their actions.
EXACTLY.

In order to create AI, I assume you would first have to prove the existence of free-will. It's not that consciousness is an illusion ... rather, it's a side effect. Flowing water could appear to be conscious. A beating heart lying on a table could appear to be conscious. Many cultures still believe such things have "spirits", but we "know better" because we have dissected hearts and water down to their base components. The same must be done for free-will. Free-will is the illusion that should be proven. Otherwise, the most you can hope for is a simulation of it ... which we naturally may or may not be exhibiting ourselves.

And self-awareness isn't all that special. Realizing that another person or thing isn't you is pretty much all it takes. I think we have this notion that at some moment we profoundly proclaimed, "I AM !!!!", but why does it have to be that way? Identifying things, even by just the senses, is an instinctual trait. A computer identifies itself easily. Introspection is what appears complicated, but how is it any different from a car's onboard computer checking itself? It's the illusion that we are in control of it. Free-will is the base, imo, that needs to be proven. Otherwise, trying to build "True AI" is planning on too many assumed variables and premises. My 2 cents :)
 
Why is consciousness considered an illusion? It's obviously something that exists. It seems more logical to consider it a reflection of our brain activity, not an illusion.
 
Why is consciousness considered an illusion? It's obviously something that exists. It seems more logical to consider it a reflection of our brain activity, not an illusion.
I don't think consciousness is an illusion ... it IS something that exists. But certain aspects of it are illusions, in the sense that you cannot prove they aren't. Self, free-will, etc. are subdivisions of the same thing. It's like cutting an apple in half and saying, "now I have two oranges". They are still parts of the same apple. However much you cut, you're not going to find oranges even if you want to ... you will only find more parts of the apple.

And we can already create the illusion of the apple ... consciousness ... in a robot, computer, etc. It's just that we "know better" and recognize it as a simulation. Show it to someone from a couple of thousand years ago and they wouldn't know the difference.
 
EXACTLY.

In order to create AI, I assume you would first have to prove the existence of free-will.

I don't think that's necessary. We may not even have free-will - it may also be an illusion.

It's not that consciousness is an illusion ... rather, it's a side effect. Flowing water could appear to be conscious. A beating heart lying on a table could appear to be conscious. Many cultures still believe such things have "spirits", but we "know better" because we have dissected hearts and water down to their base components. The same must be done for free-will. Free-will is the illusion that should be proven. Otherwise, the most you can hope for is a simulation of it ... which we naturally may or may not be exhibiting ourselves.

I'm not seeing the connection between determining whether there is free will and building strong AI. It might be interesting to know if there is free will, but if we can replicate basic human processes in a machine - specifically the ones we perceive as being responsible for consciousness - then actual free-will is beside the point.

And self-awareness isn't all that special. Realizing that another person or thing isn't you is pretty much all it takes. I think we have this notion that at some moment we profoundly proclaimed, "I AM !!!!", but why does it have to be that way? Identifying things, even by just the senses, is an instinctual trait. A computer identifies itself easily.

A computer can give you its name, but it doesn't know it's doing that.


Introspection is what appears complicated, but how is it any different from a car's onboard computer checking itself?

I would say it's the difference between your brain controlling your blood pressure when you stand up, without your awareness, and you choosing to exhale when you stand up so that you don't faint.

It's the illusion that we are in control of it. Free-will is the base, imo, that needs to be proven. Otherwise, trying to build "True AI" is planning on too many assumed variables and premises. My 2 cents :)

Whether free-will exists or not, humans appear to have what we call strong AI. Therefore, determining whether free-will really exists is not relevant to building AI.
 
Whether free-will exists or not, humans appear to have what we call strong AI. Therefore, determining whether free-will really exists is not relevant to building AI.
Well in that case you're correct ... if you're willing to accept that humans are essentially AI, then it is possible to build a machine with AI, because I think the most one can do is build a machine that mimics a human. BUT ... unless you "prove" free-will, there will be no way to determine whether the machine is operating independently of its programming at any point. The stronger the AI (I presume), the more the effect will fool you.

And even my Mac knows its name and can simulate a weak form of introspection. And how do you know a computer doesn't know its name? If you program it to tell you that it does, it will, right? The only reason we are "sure" that the computer doesn't know anything about "itself" is because it doesn't have enough idiosyncrasies programmed in to trick us. So the most that can be hoped for with this type of AI is for the creator to create something that fools himself.

This is called marriage LOL ;)
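To make the "a computer can give you its name without knowing it" point above concrete, here is a trivial Python sketch. It reports whatever hostname the operating system provides; there is no understanding anywhere in it:

import socket

# The machine "states its name": a lookup the OS performs, with no
# self-awareness involved anywhere.
print("My name is", socket.gethostname())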
 
If we ever do manage a strong AI, will it be microseconds or minutes before the Skynet scenario occurs?
 
AlBell,

That is something I worry about: that the artificial intelligence we create will end up eradicating us.
 
If we ever do manage a strong AI, will it be microseconds or minutes before the Skynet scenario occurs?
Only a strong AI would do this. A weak AI would kill itself or throw a temper tantrum for a looooooong time. A "true AI" would do neither, because it's illogical and ridiculous to behave like humans :) That's why I'm happy with my Mac the way it is ;)
 
AlBell,

That is something I worry about: that the artificial intelligence we create will end up eradicating us.
You know, has anyone considered that the deep space probes will "evolve" and come back in a few thousand years as Decepticons and blow us to smithereens? I can only hope Megan Fox is around to help us out.
 
You know, has anyone considered that the deep space probes will "evolve" and come back in a few thousand years as Decepticons and blow us to smithereens? I can only hope Megan Fox is around to help us out.

Don't forget about V'Ger!
 
Well in that case you're correct ... if you're willing to accept that humans are essentially AI, then it is possible to build a machine with AI, because I think the most one can do is build a machine that mimics a human.

I don't agree. If we can build a machine that mimics human intelligence, then it seems probable we can build one that surpasses it. For example, Blue Brain's premise is that the cortical columns making up the neocortex show a great deal of regularity. They only modeled one cortical column and plan to create a system of cortical columns using that information. This seems to rest on the idea that what differentiates the human brain from that of less intelligent animals is that we simply have more neocortex (more cortical columns). If that's the case, they could easily (with sufficient hardware) build a neocortex simulation with double the number of cortical columns a human has, and that could very possibly result in much higher intelligence.

If we get a better understanding of how intelligence actually works (rather than just copying it), then we could probably devise an intelligent system that differs from us significantly despite being based on similar principles. This would probably be preferable. Would we really want to mimic human intelligence, with all its emotion, irrationality, biological drives and psychological dysfunctions? Aside from the ethical considerations, if we simulated a hyper-intelligent human brain without understanding it and hooked it up to the internet, we could easily wind up with a Skynet-type fiasco.

Blue Brain raises some interesting questions, though. I've been trying to learn more about their project, but their website is very out of date. Will they be simulating the hindbrain and midbrain? If they are really trying to mimic human intelligence, this would be necessary. But the structures in the brain are much more diverse once you leave the neocortex ... they wouldn't be able to just model one unit and repeat it. Despite the many functions of the midbrain and hindbrain that would be irrelevant to an intelligent computer, there are plenty that seem crucial: learning, memory and sensory processing, for example. Emotion may be necessary as well.

BUT ... unless you "prove" free-will, there will be no way to determine whether the machine is operating independently of its programming at any point. The stronger the AI (I presume), the more the effect will fool you.

There is no way to determine whether a human is operating "independent from their programming". Personally, I think it's irrelevant. Will is will and the free/unfree distinction is not particularly meaningful.
 
A more interesting question will be whether or not it will have emotion/consciousness. Does it deserve rights? Etc.
 
Well in that case you're correct ... if you're willing to accept that humans are essentially AI, then it is possible to build a machine with AI, because I think the most one can do is build a machine that mimics a human.

And what is this assertion based on? Why can a computer mimic humans but not have the same level of free-will?

BUT ... unless you "prove" free-will, there will be no way to determine whether the machine is operating independently of its programming at any point. The stronger the AI (I presume), the more the effect will fool you.

Being unable to prove the existence of free-will doesn't mean we can't create a machine that has the same level of free-will (or lack thereof) as humans.

And even my Mac knows its name and can simulate a weak form of introspection. And how do you know a computer doesn't know its name? If you program it to tell you that it does, it will, right?

That would be a form of explicit programming that would surely be useless in this context.

The only reason we are "sure" that the computer doesn't know anything about "itself" is because it doesn't have enough idiosyncrasies programmed in to trick us.

Are you saying that other humans are tricking us into believing that they know about themselves?

So the most that can be hoped for with this type of AI is for the creator to create something that fools himself.

You've asserted that but you haven't really explained why. I also don't understand what you mean by "this type of AI".
 
Cyborgification.

Cyborgification and nanomachines.

That's cheating. The whole point of AI is that the machine can "think" for itself.

Having no idea how intelligence arises or its intrinsic characteristics makes aiming for it sort of like trying to shoot the purple barglesnorfer without knowing what one is or looks like.

This is a very key point. If we're waiting for some esoteric definition of AI, like "Terminator 2" wherein the machine "learns the value of love", we may be waiting forever. Most emotion is chemical anyway; it's a completely separate system.

However, going the other way, if we want to define AI as simply passing the Turing Test, well, I'd argue Cleverbot is already there (but that's probably only due to the terrible quality of most web chat in general). In other words, we can build a machine that can fool people into believing that it is intelligent, but what does that really accomplish?
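In that spirit, here is a minimal Python sketch of the kind of canned pattern-matching such chatbots lean on (the rules below are invented for illustration, not Cleverbot's actual method). It can produce passable chat while understanding nothing:

import random

# Keyword -> canned response; the first matching keyword wins.
RULES = {
    "why": "Why do you ask?",
    "ai": "Do machines worry you?",
}

def reply(message):
    for keyword, response in RULES.items():
        if keyword in message.lower():
            return response
    # No keyword matched: stall with a generic prompt.
    return random.choice(["Tell me more.", "Go on.", "Interesting."])

print(reply("Why?"))  # -> "Why do you ask?"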

It is also important to note the vast amount of technical analysis that has been automated via software in the past 20 years. Now, many argue that rules-based systems are not "true" AI (as compared to an adaptive algorithm), but for a pragmatist, this is good enough. Example: I just coded a system to ban users after detecting a certain amount of forbidden activity; in practice all I did was implement a standing company procedure. That's not "real" AI to a researcher, but to a business owner, who can now reduce payroll because that function is successfully automated, it's better.
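For concreteness, a minimal Python sketch of that kind of rules-based automation. Everything here (the three-strikes limit, the names) is hypothetical, standing in for whatever the actual company procedure was:

# Hypothetical three-strikes policy, encoded as a plain rule.
FORBIDDEN_LIMIT = 3

def record_violation(user, violations):
    # Count the violation; ban once the user reaches the limit.
    violations[user] = violations.get(user, 0) + 1
    return "banned" if violations[user] >= FORBIDDEN_LIMIT else "warned"

violations = {}
for _ in range(3):
    status = record_violation("user42", violations)
print(status)  # -> "banned" on the third strike

No learning, no adaptation: the "intelligence" is entirely the policy its author wrote down, which is exactly the pragmatist's point.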

But to get to the core of AI, I agree with most in this thread that adaptive/evolutionary algorithms are the best shot we've got (just run a zillion iterations of trial-and-error and hope for the best; a toy version of that loop is sketched after the anecdote below). Of course, "intelligence" is a fickle mistress indeed. I am reminded of this relevant anecdote:

The US Army was interested in developing an AI system to identify whether or not tanks were present in given recon photographs. An adaptive algorithm was created, and the AI was shown a thousand photos - some with tanks, some without. After some tweaking, the AI began working with astounding success - 90+% accuracy. The researchers upped the ante, and showed the AI pictures where the tanks were covered by trees and other obstacles. After X iterations, the AI was succeeding again - 90+% accuracy. The researchers patted each other on the back, proud of their unparalleled success.

Sadly, it was eventually discovered that half of the photos - those with tanks - had been taken in the early evening; those without tanks had been taken in the day. All the AI had learned to do was identify the level of daylight.

A substantial blow to intelligence - both artificial and the real thing.
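As for the trial-and-error loop referenced above, here is a toy (1+1) evolutionary algorithm in Python. The OneMax fitness (count the 1-bits) is a stock textbook problem, not anything from this thread; it just makes the mutate-and-keep-improvements cycle visible:

import random

def fitness(bits):
    # OneMax: more 1-bits is better.
    return sum(bits)

candidate = [random.randint(0, 1) for _ in range(32)]
for _ in range(10000):  # "a zillion iterations", scaled down
    # Flip each bit with probability 1/32; keep the mutant if it is no worse.
    mutant = [b ^ (random.random() < 1 / 32) for b in candidate]
    if fitness(mutant) >= fitness(candidate):
        candidate = mutant

print(fitness(candidate), "of", len(candidate))  # usually 32 of 32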
 
The US Army was interested in developing an AI system to identify whether or not tanks were present in given recon photographs. An adaptive algorithm was created, and the AI was shown a thousand photos - some with tanks, some without. After some tweaking, the AI began working with astounding success - 90+% accuracy. The researchers upped the ante, and showed the AI pictures where the tanks were covered by trees and other obstacles. After X iterations, the AI was succeeding again - 90+% accuracy. The researchers patted each other on the back, proud of their unparalleled success.

Sadly, it was eventually discovered that half of the photos - those with tanks - had been taken in the early evening; those without tanks had been taken in the day. All the AI had learned to do was identify the level of daylight.

A substantial blow to intelligence - both artificial and the real thing.
Your example illustrates poor data selection more than AI limits.

If you train a supervised neural network with bad data, it will not learn what you expect it to learn. The neural network will learn what you feed it.

Data selection is one of the most important steps in neural network training. In this case it was poorly done.

nimzo
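To make the data-selection point concrete, here is a small Python sketch of the tank story: the labels are confounded with brightness, so a simple learner latches onto brightness and then fails on clean data. All the numbers are invented for illustration:

import random

def make_photo(has_tank, confounded):
    # In the confounded set, tank photos are darker (evening), as in the story.
    mean = 0.3 if (has_tank and confounded) else 0.7
    return random.gauss(mean, 0.1)

def train_threshold(data):
    # Pick the brightness cutoff that best separates the labels.
    best_t, best_acc = 0.5, 0.0
    for t in (i / 100 for i in range(100)):
        acc = sum((b < t) == label for b, label in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

labels = [True, False] * 500
confounded = [(make_photo(y, True), y) for y in labels]
clean = [(make_photo(y, False), y) for y in labels]

t, acc = train_threshold(confounded)
print("confounded accuracy:", acc)  # near 1.0, which looks like success
clean_acc = sum((b < t) == label for b, label in clean) / len(clean)
print("clean accuracy:", clean_acc)  # near 0.5: it only learned brightness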
 
Your example illustrates poor data selection more than AI limits.

If you train a supervised neural network with bad data, it will not learn what you expect it to learn. The neural network will learn what you feed it.

Data selection is one of the most important steps in neural network training. In this case it was poorly done.

nimzo
Do you know of a case where it was "properly" done, and the system functions as expected?
 
Your example illustrates poor data selection more than AI limits.

100% true.

I am reminded of another anecdote-

"It would be easier to train an AI if we had something smarter than a human to do it"

Forgive a lack of attribution; I've been following this so long it all mixes together in my head at this point.
 
Do you know of a case where it was "properly" done, and the system functions as expected?
This is a classic textbook example of using "bad" data, or not enough data.

For this specific example, I don't know if the network was retrained with more diversified data. Most likely so. Is it workable in the field? I do not know.

But of course, many artificial neural networks work as they are expected to.

nimzo
 
But of course, many artificial neural networks work as they are expected to.

nimzo
Glad to hear it. They didn't work way back when, in that I'm aware of unsuccessful attempts to use them in the field of digital data processing and analysis.

Do you have a favorite example to cite?
 
