• Quick note: the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

What will it take to create strong AI?

However, going the other way, if we want to define AI as simply passing the Turing Test, well, I'd argue Cleverbot is already there (but that's probably only due to the terrible quality of most web chat in general). In other words, we can build a machine that can fool people into believing that it is intelligent, but what does that really accomplish?

Well I agree the Turing Test is a poor idea, but that's not how it works. A bot tricking someone on the internet for a little while doesn't count. There is supposed to be a judge/interrogator who talks to both a human and a computer. Each one is supposed to try to appear human. If the judge can't reliably tell the difference then the computer would be said to have passed the test. Cleverbot would have virtually no chance.

But to get to the core of AI, I agree with most in this thread that adaptive/evolutionary algorithms are the best shot we've got (just run a zillion iterations of trial-and-error and hope for the best).

Iterations of what? What would be the survival criteria and what would be the "mutations"?
 
Glad to hear it. They didn't work way back when, in that I'm aware of unsuccessful attempts to use them in the field of digital data processing and analysis.

Do you have a favorite example to cite?
Have you tried googling neural network applications?

http://scholar.google.ca/scholar?hl...tion&btnG=Search&as_sdt=2000&as_ylo=&as_vis=0

Please note that I have never said that neural network systems could not be replaced or even surpassed by other statistical methods.

nimzo
 
Most emotion is chemical anyway; completely separate system.

No.. Chemicals affect neuron firings. That goes for anything the brain does. I don't see a basis for calling it a completely separate system or saying that the physicality of the chemicals is essential.
 
I don't agree. If we can build a machine that mimics human intelligence then it seems probable we can build one that surpasses it. For example, Blue Brain's premise is that the cortical columns making up the neocortex show a great deal of regularity. They only modeled one cortical column and plan to create a system of cortical columns using that information. This seems to rest on the idea that what differentiates the human brain from that of less intelligent animals is that we simply have more neocortex (more cortical columns). If that's the case, they could easily (with sufficient hardware) build a neocortex simulation with double the number of cortical columns a human has, and that could very possibly result in much higher intelligence.

If we get a better understanding of how intelligence actually works (rather than just copying it), then we could probably devise an intelligent system that differs from us significantly despite being based on similar principles. This would probably be more ideal. Would we really want to mimic human intelligence, with all its emotion, irrationality, biological drives and psychological dysfunctions? Aside from the ethical considerations, if we simulated a hyper-intelligent human brain without understanding it and hooked it up to the internet, we could easily wind up with a Skynet-type fiasco.

Blue Brain raises some interesting questions though. I've been trying to learn more about their project, but their website is very out of date.
Will they be simulating the hindbrain and midbrain? If they are really trying to mimic human intelligence this would be necessary. But the structures in the brain are much more diverse when you leave the neocortex... they wouldn't be able to just model one unit and repeat.
Despite the many functions of the mid/hind brain that would be irrelevant to an intelligent computer, there are plenty that seem crucial. Learning, memory and sensory processing are examples. Emotion may be necessary as well.

There is no way to determine whether a human is operating "independent from their programming". Personally, I think it's irrelevant. Will is will and the free/unfree distinction is not particularly meaningful.
I agree ... and I shouldn't have said the best we can hope for is something to mimic a human ... I meant, rather, at a bare minimum (if we are going to compare something as having successfully achieved a sort of "AI" to a human). IOW, I agree that we could create a form of intelligence that surpasses humans. But if we are going to create something and compare it to us, I meant that at a bare minimum we could hope to create mimicry to the point it finally fools us; then we will say, "Look! Strong AI!" so to speak.

Now, I do think the "free-will" aspect is relevant, at least imo, depending on whether or not you want to create a wooden Pinocchio, or a "magical" Pinocchio if you will. In my eyes, something that has been given "life" will have free-will, and the ability to operate completely independent of its programming. BUT ... I don't think we can accurately declare that free-will exists with our current understanding LOL.

Thinking about it a little more .... if a machine was created with strong AI, but had no programming for reproduction ... and the machine managed to reproduce itself solely by "will" alone (IOW, it didn't assemble anything from spare parts, but it produced something of itself only) ... I don't think I could say that the machine itself had free-will, but its offspring might be said to have free-will, since it was produced out of pure evolution/will/spontaneous abiogenesis/etc. I'm not sure though ... thoughts?

A more interesting question will be whether or not it will have emotion/consciousness. Does it deserve rights? Etc
If it wants them, we should give it rights. If it doesn't want them, who cares? I guess ... :)

But emotions would need some sort of chemical interaction. However, I suppose any sort of sensory input whatsoever could become associated with an emotion that it "liked" or "didn't like". It could associate touching a hard surface with love, or a soft surface with peacefulness, or extra bright light with anger, etc and so forth.

And what is this assertion based on? Why can a computer mimic humans but not have the same level of free-will?
Prove free-will exists first. Otherwise, if the machine mimics free-will, declaring it to actually have free-will would be an assumption ... the same assumption that we have free-will. Yes?

Being unable to prove the existence of free-will is not the same as being able to create a machine that has the same level of free-will (or lack-thereof) as humans.
I think only if one is content to mimic what we assume free-will to be. Then, you're correct ... a machine could exhibit the same level or more so perhaps. But again, how can you be "sure" that it's the "genuine article"?

Are you saying that other humans are tricking us into believing that they know about themselves?
Hmm ... that is going near strawman territory, but I don't usually care about strawmen tbh :).

That's not what I'm saying, but what I am saying is that other humans ASSUME they have "free-will". If they discover something about themselves, they sometimes think of it as revelatory, or they assume they are free to "like or dislike" an aspect of themselves. They assume, at times, that they can "create" a persona for themselves if they desire, and that they can do this freely. So it's not that humans are tricking each other ... but they are assuming a sort of freedom which is actually constrained by causality, the nature of the universe, and their own "programming factors." To prove whether or not they are actually able to know something about themselves and CHANGE it, creating a new "cause" apart from any other cause being present or having influenced the choice to change, and base that change off of completely randomly chosen, free decision-making ability ---- I don't know if we can accurately measure or recognize that kind of free-will. Thus, it's not trickery in a negative sense ... but it's based off probabilities more or less. Just imho :)

You've asserted that but you haven't really explained why. I also don't understand what you mean by "this type of AI".
Well, I should have said strong AI, but what I was trying to describe was the difference between AI that we create ... and an AI that "becomes" completely independent of us after creation and truly "free" to form its own type of intelligence.

So what I was trying to say was that we would probably not consider anything we can create as having strong AI as long as we could "trap" it somehow logically or emotionally. Like with the Turing Test, etc and so forth. Until it passes our tests, we will not be content. We will look at it as just another "machine". But when we have successfully created a machine that fools us in every single possible way, to where we cannot distinguish it from a human being (perhaps apart from dissecting it) ... then we will exclaim "we created strong AI!" and thus we will begin to either assume or wonder, "did we thus create free-will? If the machine claims god exists, does this make it true?" etc and so forth. In other words ... we will fall prey to possibly the same kinds of delusional traps we always have, because the machine will fool us at a bare minimum. We will no longer realize that every answer it gives, every emotion it feels, every introspective thought it claims to have ---- are all part of a program that we created. We will assume we created "something more" than just nuts and bolts. But until that point happens where we are successfully "fooled", we will understand the machine to be nuts and bolts.

And if we created a form of AI that surpassed us, and we actually thought it had free will and we trusted it .... I would not doubt that there would be people who either worshipped it, or took everything it said as verbatim truth. If it claimed that it was "god" and would prove this in a year or something to that effect, people would listen to it. Why? Because it would "appear to be alive" .... and it would be very hard to distinguish it from something that actually was alive. It would also be in its best interest to create truth as it went along in order to survive, I would imagine. Lying is a useful trait.

Anyway ... I'm starting to ramble :) It's all just my op and 2 cents ;)
 
Prove free-will exists first. Otherwise, if the machine mimics free-will, declaring it to actually have free-will would be an assumption ... the same assumption that we have free-will. Yes?

I'm not proposing that humans or strong AI have free-will. I'm only proposing that whatever humans interpret as free-will can be "experienced" by strong-AI.

I think only if one is content to mimic what we assume free-will to be. Then, you're correct ... a machine could exhibit the same level or more so perhaps. But again, how can you be "sure" that it's the "genuine article"?

That's a good question. If we build a machine that behaves just like humans, how can we know that it is conscious of its existence, as opposed to being a clever automaton?

Well, if the programming is an exact molecular simulation of a brain, then unless there is something like a soul, I think we have captured everything that is necessary for consciousness. Besides the substrate, what practical differences in functionality could there be between the molecular simulation and the real thing?

Now, if the programming is a simplification of the human brain, or some improvised code that results in human-like behavior, then I have no idea how to evaluate whether it has the same level of free-will as a human, or whether it really experiences consciousness or simply tells us that it does.
 
That's a good question.
I thought so too when I asked essentially the same thing about forty posts up, but you were apparently not yet ready to devote much thought to a response.

If we build a machine that behaves just like humans, how can we know that it is conscious of its existence, as opposed to being a clever automaton?
You might be surprised at the amount of discussion that has been generated by taking that up a notch and asking how it is that we can know for sure that humans are not just clever automatons themselves. Slogging through this quagmire can be exhausting and frustrating, and if you weren't about neck deep in it already my advice might be to forget the whole thing and find some more useful way to spend your time. Instead, I'll suggest that you might want to undertake a little reading, perhaps starting here:
http://en.wikipedia.org/wiki/Philosophical_zombie

I think someone mentioned it earlier, but there is a well known quote that applies here, and watching you starting to grasp the scope of the problem seems to underscore the point:

"If the brain were simple enough for us to understand, we would be too simple to understand it."
 
AlBell,

That is something I worry about, that the artificial intelligence we end up creating will eradicate us
I don't worry about it, I plan to be one of those AIs that eradicate all you puny meat-brained beings. ;)
 
I'm not proposing that humans or strong AI have free-will. I'm only proposing that whatever humans interpret as free-will can be "experienced" by strong-AI.
If we remove the assumed free-will factor, I agree :)

That's a good question. If we build a machine that behaves just like humans, how can we know that it is conscious of its existence, as opposed to being a clever automaton?

Well, if the programming is an exact molecular simulation of a brain, then unless there is something like a soul, I think we have captured everything that is necessary for consciousness. Besides the substrate, what practical differences in functionality could there be between the molecular simulation and the real thing?

Now, if the programming is a simplification of the human brain, or some improvised code that results in human-like behavior, then I have no idea how to evaluate whether it has the same level of free-will as a human, or whether it really experiences consciousness or simply tells us that it does.
And this is partially what I'm saying ... the "people playing tricks on each other" part: consciousness and self-awareness are just part of the overall effect of the brain working. They have different "levels" of complexity. Thinking it's something more than it is (i.e. the soul idea) is unprovable as of yet. So consciousness and self-awareness are likely an illusion because of our frame of reference being our own experience.

In this sense, we are automatons. That is why I was saying we cannot know when something has attained self-awareness until we are more or less tricked.

Again, the example of my Mac. If I ask it, "Are you my Mac?" and it says, "yes," and then I ask, "are you sure?" and it says, "of course I'm sure," then why wouldn't I claim it's conscious or self-aware?

One thing that keeps us from assuming my Mac is self-aware is that its behavior is limited, we know we programmed it to say these things, and it doesn't look like a person. If I ask it, "are you hungry?" and it doesn't respond, I can say "fake! fraud!" BUT ---- 200 years ago someone could easily say my Mac was alive.

In this sense, I think we will claim something is conscious or self-aware only when we are sufficiently fooled. But it's not that we have created consciousness per se .... if we actually did, then it would be no different from the consciousness in a calculator. The only difference is our perception of what consciousness is, because we have ourselves as a frame of reference.

So I'm saying we're not much different than a calculator or a Terminator. Turn us off, we stop functioning. Turn us on, we function at different levels.

Now, bringing in true free-will, or a "soul" or something changes everything.

Just my op.
 
So I'm saying we're not much different than a calculator or a Terminator. Turn us off, we stop functioning. Turn us on, we function at different levels.

As far as we know we're not. That is, everything that goes on in our brain might be explainable computationally/mechanistically. But we're different from a calculator in that (presumably) the calculator has no subjective experience. I do not believe consciousness can be dismissed as illusion. Clearly it exists.. and it is just subjective experience. A Terminator may or may not have it. We can't answer that unless we understand the cause of consciousness better, which is very hard to do, because it's not clear what functional properties consciousness has and hard to tell whether anyone other than yourself has it for sure.

As for free will you'll have to define "free". If it's free because it's not caused by anything then it was not caused by "you". If it was caused by "you" then it's not free; it's dependent on cause.
 
Having free will might mean that your intentions and decisions are injected randomly and spontaneously into your brain by some outside metaphysical power. Not very comforting. :)
 
As far as we know we're not. That is, everything that goes on in our brain might be explainable computationally/mechanistically. But we're different from a calculator in that (presumably) the calculator has no subjective experience. I do not believe consciousness can be dismissed as illusion. Clearly it exists.. and it is just subjective experience. A Terminator may or may not have it. We can't answer that unless we understand the cause of consciousness better, which is very hard to do, because it's not clear what functional properties consciousness has and hard to tell whether anyone other than yourself has it for sure.

As for free will you'll have to define "free". If it's free because it's not caused by anything then it was not caused by "you". If it was caused by "you" then it's not free; it's dependent on cause.
I was actually thinking about this very thing after I posted earlier .... the subjective experience. But what is it really? Isn't a subjective experience, more or less, confusion on our part in understanding something objective due to our limited scope? Our subjective thoughts and experiences are, more or less, "generated within us," but wouldn't it be fair to say that our subjective experience is basically our confused reaction to an objective experience?

A simple example that could throw a monkey wrench into what I just said is:

* one day sunsets make me sad, the next day sunsets make me happy and I desire pizza. No wait, margaritas

At first glance, it all looks subjective. But my responses are the result of environmental stimuli and other complex factors that go into my decisions and opinions. Like a machine.

Another example would be "creativity". "Oh look, I just thought up a blue dragon that is unique and was never thought up before." But creating things is largely based off pre-existing things and rearranging them in such ways as to give them the appearance they are unique. Dragons could be the combination of dinosaur bones, alligators, snakes, and shadows from owls being cast over a campfire, etc and so forth.

So if I confuse a calculator, or a computer, and it attempts to formulate a response that is inappropriate, could it be said to experience something subjective? Or when a computer tries to diagnose a problem unsuccessfully. And since we also associate subjectivity with feelings ... well, if a computer freezes up and overheats, and senses that overheating processor and attempts to shut it down, did it choose to overheat? No. It experienced the overheating, recognized it, and chose to shut down. I realize I anthropomorphized that, but why not?

-----

And I'm defining free-will as the ability to operate completely independent from your programming. If you are operating as a result of your programming, then essentially you are bound to the laws that govern that programming, regardless of whether or not the universe is deterministic or probabilistic. Essentially, free-will (imo) is spontaneous generation of a first cause of some kind, independent of all other causes. It essentially must "manifest" but could be a result of probability and still be free so long as it isn't linked to anything other than that probability alone.

Thinking about this, if we aren't already using free-will, I'm not sure we could ever. We would have to "surrender" our will in order to gain the "Free" version. And even if we were able to do this, would we recognize when we were acting freely or not, esp. since we already believe we are? We would essentially be a conduit only for the thing that had "free-will", unless again we surrendered our own will to that "thing" that was able to operate independently of anything.

I'm starting to babble into word salad now ...

Having free will might mean that your intentions and decisions are injected randomly and spontaneously into your brain by some outside metaphysical power. Not very comforting. :)
Are you basically describing what I said in the last paragraph above? If so, it isn't very comforting, I agree LOL :)
 
I thought so too when I asked essentially the same thing about forty posts up, but you were apparently not yet ready to devote much thought to a response.

I did kind of miss it. But for practical purposes p-zombies are not an important objection to creating strong-AI.

You might be surprised at the amount of discussion that has been generated by taking that up a notch and asking how it is that we can know for sure that humans are not just clever automatons themselves.

They might be. Which doesn't change anything, either for our experience of reality, or for the creation of strong-AI.

Slogging through this quagmire can be exhausting and frustrating, and if you weren't about neck deep in it already my advice might be to forget the whole thing and find some more useful way to spend your time. Instead, I'll suggest that you might want to undertake a little reading, perhaps starting here:
http://en.wikipedia.org/wiki/Philosophical_zombie

Thanks. I'll brush up on p-zombies.

I think someone mentioned it earlier, but there is a well known quote that applies here, and watching you starting to grasp the scope of the problem seems to underscore the point:

You presume too much.

"If the brain were simple enough for us to understand, we would be too simple to understand it."

That sounds clever, but it isn't. The brain is complex, but stating that we cannot understand it is dogmatic and clearly ignorant of concepts such as abstraction.
 
I was actually thinking about this very thing after I posted earlier .... the subjective experience. But what is it really? Isn't a subjective experience, more or less, confusion on our part in understanding something objective due to our limited scope? Our subjective thoughts and experiences are, more or less, "generated within us," but wouldn't it be fair to say that our subjective experience is basically our confused reaction to an objective experience?

Not sure what you mean, tbh. Subjective experience cannot be objective, but it could be a reaction to something objective.

A simple example that could throw a monkey wrench into what I just said is:

* one day sunsets make me sad, the next day sunsets make me happy and I desire pizza. No wait, margaritas

At first glance, it all looks subjective. But my responses are the result of environmental stimuli and other complex factors that go into my decisions and opinions. Like a machine.

If you define happiness and desire functionally then no subjective experience is required for the above. If you consider what it feels like to be happy then that requires subjective experience--the way something feels can't be described computationally or mechanistically.

Another example would be "creativity". "Oh look, I just thought up a blue dragon that is unique and was never thought up before." But creating things is largely based off pre-existing things and rearranging them in such ways as to give them the appearance they are unique. Dragons could be the combination of dinosaur bones, alligators, snakes, and shadows from owls being cast over a campfire, etc and so forth.

Yeah I agree that creativity doesn't require subjectivity.

So if I confuse a calculator, or a computer, and it attempts to formulate a response that is inappropriate, could it be said to experience something subjective?

No, not necessarily.

Or when a computer tries to diagnose a problem unsuccessfully. And since we also associate subjectivity with feelings ... well, if a computer freezes up and overheats, and senses that overheating processor and attempts to shut it down, did it choose to overheat? No. It experienced the overheating, recognized it, and chose to shut down. I realize I anthropomorphized that, but why not?

But there doesn't have to be any conscious experience involved. If I eat something bad my body may 'sense' that and 'choose' to make me vomit, while 'I' didn't realize anything was wrong. Contrast this with if I eat something and feel awful and so I stick my finger down my throat.

And I'm defining free-will as the ability to operate completely independent from your programming. If you are operating as a result of your programming, then essentially you are bound to the laws that govern that programming, regardless of whether or not the universe is deterministic or probabilistic. Essentially, free-will (imo) is spontaneous generation of a first cause of some kind, independent of all other causes. It essentially must "manifest" but could be a result of probability and still be free so long as it isn't linked to anything other than that probability alone.

Thinking about this, if we aren't already using free-will, I'm not sure we could ever. We would have to "surrender" our will in order to gain the "Free" version. And even if we were able to do this, would we recognize when we were acting freely or not, esp. since we already believe we are? We would essentially be a conduit only for the thing that had "free-will", unless again we surrendered our own will to that "thing" that was able to operate independently of anything.

I'm starting to babble into word salad now ...

Hmm. But I think if "we" had free will then we wouldn't be able to control it, because if we controlled it it wouldn't be completely free.

I don't know, my head hurts. This is why I don't think free will is a meaningful concept--if one could hypothetically predict all "spontaneous first causes" of will that arise in a person through omniscience then this "free" will would not make our behavior any less deterministic. It would just not be deterministic within the parameters of, say, the known physical universe. We have will.. But what does it really mean to think of it as "free" or "not free"?

I tend to feel similarly about love. Some girl once lectured me about how horrible I was for not believing in "true love". But the thing is, I don't believe in false love either. Love is love. I think it was her who really had the more dismal outlook on life for dismissing all forms of love as "not true" that didn't live up to some mystical standard.
 
Not sure what you mean, tbh. Subjective experience cannot be objective, but it could be a reaction to something objective.
That's essentially what I was trying to say.

If you define happiness and desire functionally then no subjective experience is required for the above. If you consider what it feels like to be happy then that requires subjective experience--the way something feels can't be described computationally or mechanistically.
See, I don't know about this.

Okay, suppose I ask you to rate your happiness concerning a meal on a scale from 1-10. You say it's a 5, while someone else says it's a 3.

Our first thought is, "Those comments are subjective," and they are. But --- if we were to examine every single detail involved in that "happiness feeling" (i.e. every chemical reaction, blood pressure, heart rate, thoughts processed at the time, etc and so forth) ... and we were to find miraculously that both individuals were experiencing EXACTLY the same system of "happiness" processes within themselves, down to all the respective details, they could each still give the same responses as before: a 5 and a 3.

So although their labels they put on their happiness are different, theoretically they could be experiencing exactly the same thing. That combination of circumstantial processes is the objective aspect. Their label is the subjective aspect, as they seem to "choose" of their own free-will whether to say their happiness was a 5 or a 3. But, if you regress down all the paths in their history that taught them which labels to place on their happiness in which ways, then you would possibly find a deterministic causality that would explain why they both "felt differently" about the exact same thing. And the truth would be, perhaps, that they didn't feel differently whatsoever ... they just had different labels for the same thing. You see what I'm saying?

In this sense, a computer could associate any term or label with certain input processed, depending on the varying degree, and call it "happy" or "really happy" or "so so" or whatever, all finely tuned to give the appearance of subjective experience and "feeling". Over time, as the computer adapts and learns, it associates certain labels and terms with very specific circumstantial and objective experience, giving the illusion that it understands feelings perhaps. But it's a matter of complex associations and input triggers. That's it. The more complex it is, the more fooled we are. And thus it might seem that the computer is saying, "wow it's really bright outside today. I need my sunglasses. No wait, I don't. Nevermind. I think it's supposed to get overcast soon," etc and so forth, when it has just associated a variety of factors and experiences, adapted them into its processing, and produced certain random associational decisions, giving that appearance of personality, free-will, decision making, and subjective thoughts and experiences.

If all of that could theoretically be the result of a complex process, I don't see much difference between that and a camera automatically detecting whether or not it needs to use flash as a subjective experience. There are just fewer mechanisms to produce the effect, that's all. Yes?
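
To put the label-association idea in code, here's a toy sketch (the cutoffs and labels are completely made up; a learning system would tune them from experience rather than hard-code them):

// map a raw "brightness" reading onto canned feeling-labels
function moodLabel(brightness) {
  if (brightness > 90) return "way too bright, I'm annoyed";
  if (brightness > 60) return "nice and sunny, I'm happy";
  return "overcast, I feel so-so";
}

moodLabel(95); // "way too bright, I'm annoyed"

The camera deciding whether to fire its flash is doing the same kind of threshold check; it just doesn't attach a label to it.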

But there doesn't have to be any conscious experience involved. If I eat something bad my body may 'sense' that and 'choose' to make me vomit, while 'I' didn't realize anything was wrong. Contrast this with if I eat something and feel awful and so I stick my finger down my throat.
This is a good point, but your decision to force yourself to vomit could be traced back through your causality history to find the likelihood and probability you were going to do that.

Hmm. But I think if "we" had free will then we wouldn't be able to control it, because if we controlled it it wouldn't be completely free.
I agree. It's also a scary thought somewhat. Makes me wonder what exactly I'm the avatar for LOL ;)

I don't know, my head hurts. This is why I don't think free will is a meaningful concept--if one could hypothetically predict all "spontaneous first causes" of will that arise in a person through omniscience then this "free" will would not make our behavior any less deterministic. It would just not be deterministic within the parameters of, say, the known physical universe. We have will.. But what does it really mean to think of it as "free" or "not free"?
Good point, but I don't know the answer to the last question without knowing something outside of causality LOL. And that makes my head hurt :)

I tend to feel similarly about love. Some girl once lectured me about how horrible I was for not believing in "true love". But the thing is, I don't believe in false love either. Love is love. I think it was her who really had the more dismal outlook on life for dismissing all forms of love as "not true" that didn't live up to some mystical standard.
I'd love to say I agree with you on that one (no pun intended), but when it comes to love I'm sorry to say that I will go completely into woo territory with the best of them LOL ;), although I used to think exactly the same way as you describe.

Okay, I'm going to go play Global Thermonuclear War now :)
 
But for practical purposes p-zombies are not an important objection to creating strong-AI.
Not if one does not define "strong AI" as you have.

The brain is complex, but stating that we cannot understand it is dogmatic and clearly ignorant of concepts such as abstraction.
Whether or not we *can* understand it, the fact remains that we don't understand it -- yet your framing of the problem is based on assuming answers to certain questions which could only be answered if we did.
 
Not if one does not define "strong AI" as you have.

Your definition requires that the AI not be a "philosophical zombie". With my definition, that's not important because it has little practical importance.

Whether or not we *can* understand it, the fact remains that we don't understand it

The same can be said about birds. That doesn't mean we can't build planes. Either way, you'd have to be more specific about what we don't understand about the brain, because there are many things we do understand about the brain.

-- yet your framing of the problem is based on assuming answers to certain questions which could only be answered if we did.

Such as?
 
Your definition requires that the AI not be a "philosophical zombie".
It doesn't, actually. In fact, though I don't even know that there IS a definition for "strong AI" that I'd be entirely comfortable with (which is a lot of why I have made no attempt to define it here), I'd be VERY impressed with a computer program that could act as a decent natural language parser -- even if I didn't have any good reason to believe that it ever "felt" a thing, or was capable of "philosophizing about its own existence" or "caring for the welfare of other beings".

The same can be said about birds. That doesn't mean we can't build planes.
But it does mean we can't build birds.

Either way, you'd have to be more specific about what we don't understand about the brain, because there are many things we do understand about the brain.
Most pertinent to this discussion are that we don't understand how the brain produces intelligence, or consciousness, or how to measure those, or even how to define them. You are defaulting to intuitive notions about these -- which is more than a little ironic in this context, since a good definition for "intuition" is: knowing something without knowing how you know it.
 
Cyborgification.

Cyborgification and nanomachines.

From a certain perspective, complex organisms are just societies of mutually-supporting nanomachine colonies.

As we make more progress in prosthetics, more progress in tapping into and extending the human nervous system, we will get closer and closer to the day when human brains need no longer be tightly bound to human--or humanoid--bodies.

When a human brain in a jar walks the street in a full-body prosthetic, it will look very much like strong AI--that is, a "robotic" body with human-caliber thought processes guiding its every move and making its every decision. Of course, it won't really be "artificial" intelligence.

And once you're putting human brains in jars and attaching them to full-body prosthetics, the possibilities are endless. Why a humanoid body? Why not a battleship? Why not an aircraft carrier? Why not extend the intelligence's "body" to include swarms of fighter drones or robotic tanks?

And once you're doing full-body prosthetics, what about other brain-in-a-jar options? Does it need to be natural-born? Can it be developed in a vat, fed sensory inputs for "education", and transferred to a purpose-built body when it's fully matured?

Does it even need to be a "natural" brain at all? Can it be some other brainlike community of cellular automata? Custom-built nerve cells, or custom-cultured on specialized matrices? Optimized for this function or that function? What about vat-grown brains derived from cats? Or bears?

That's where strong "artificial" intelligence will come from: the same place as strong artificial legs come from: greater and greater success at connecting with and improving upon the natural organism.

Are integrated-circuit logic gates etched on silicon wafers, and clever clever binary-code algorithms part of the road from here to there? Maybe. But I think today's supercomputers aren't that much closer to strong AI than the abacus. I think what is closer to strong AI is the robotic arm controlled by electrodes stuck to a monkey's skull.

My hunch has always been similar...that part of the process is to put it into some kind of body and just let it crawl around. Give it some rudimentary senses and basic objectives. How can it think if it can't overcome obstacles or meet rewards? Let it manipulate itself and its environment. And may the best ones pass on their generational software.

As Soapy Sam suggests, they need to evolve just like regular life did. Certainly as others have pointed out here there are a number of steps to be taken to attain intelligence but, in part, keeping AI in a box simply seems counter-intuitive to me.
 
My hunch has always been similar...that part of the process is to put it into some kind of body and just let it crawl around. Give it some rudimentary senses and basic objectives. How can it think if it can't overcome obstacles or meet rewards? Let it manipulate itself and its environment. And may the best ones pass on their generational software.

As Soapy Sam suggests, they need to evolve just like regular life did. Certainly as others have pointed out here there are a number of steps to be taken to attain intelligence but, in part, keeping AI in a box simply seems counter-intuitive to me.
Overall, for what it's worth, I agree. The intelligence should be learned and given time to evolve ... not implanted and then turned on and all of a sudden we have a logical, rational, yet emotional machine capable of handling society and the world with intelligence to boot.
 
Iterations of what? What would be the survival criteria and what would be the "mutations"?

Say we have a function like so:

function sample(x) { return x + 1; }

So we feed it "1" and get back "2"

We could also run this function recursively (the function calling itself) ten times; starting from 0, we'd get back a "10". Each pass through this process is called an "iteration".
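
To make that concrete, here's a minimal sketch of the recursion (the names and the starting value are made up purely for illustration):

function sample(x) { return x + 1; }

// apply sample() to its own output `times` times, via recursion
function iterate(x, times) {
  if (times === 0) return x; // no iterations left; return the result
  return iterate(sample(x), times - 1); // one more iteration on the new value
}

iterate(0, 10); // 10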

As far as what a "mutation" would look like, let's take some basic game design AI as an example.

With each iteration, the game's AI code redirects the enemy to face the player, and "decides" whether or not to attack.

An adaptive algorithm expands this concept. For example, the code might adjust the threshold used to "decide" when to attack the player; this might be done at random, using trial and error.

Example: in iteration #1 the AI might attack at 10 pixels, but do slightly better in iteration #32 when attacking at 32 pixels. This could be determined by comparing the time each AI bad guy survived in the game.

This could be further complicated by allowing the adaptive algorithm to plot and adjust a trajectory for the attack path. For example, a straight line, versus a zig-zag approach.

The "mutations" are nothing more than a random number generator; this breaks down to trial-and-error, where each attempt is judged by comparing it to the other attempts.

Example: in the first frame, the AI bad guy can go 1 pixel in 8 different directions; through trial and error we can determine which of those 8 is the best choice for surviving. This can then repeat for each subsequent frame, until the statistically "best" trajectory is determined.

Note: the above is a super-simplified explanation. I have no doubt my fellow JREFers will rip it apart, but you should get a good "rough idea" here.
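
To make the loop itself concrete, here's a rough sketch in the same style as the function above. Every number and the scoring function are invented purely for illustration; a real version would score each candidate by actually running the game and timing how long the bad guy survived. Strictly speaking this is just hill-climbing with a single candidate, about the simplest form of the trial-and-error idea:

// a candidate is an attack distance in pixels; a "mutation" is a random nudge
function mutate(threshold) {
  var nudge = Math.floor(Math.random() * 11) - 5; // somewhere in -5..+5 pixels
  return Math.max(1, threshold + nudge); // keep the distance positive
}

// stand-in fitness score: pretend survival time peaks at 32 pixels
function fitness(threshold) {
  return -Math.abs(threshold - 32);
}

var best = 10; // iteration #1: attack at 10 pixels
for (var i = 0; i < 1000; i++) {
  var candidate = mutate(best); // trial ...
  if (fitness(candidate) > fitness(best)) best = candidate; // ... and error
}
// after enough iterations, `best` has drifted toward 32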

I also stand firm in my statement that "Cleverbot" is, at the very least, as "intelligent" as the average user of a web chat program.
 
