
Artificial Sentience. How would it be recognized?

Elind
Philosopher · Joined Aug 3, 2001 · 7,787 messages · S.E. USA, sometimes bible country
This is not a new question or thought, at least not since Turing asked it; but it keeps cropping up in movies.

What I find in reading or hearing such discussions, however, is that they seem to miss the simple fact that what makes humans human is the uncontrollable imperfections we all have. This was addressed in Star Trek through Data (the character) on occasion, but seems to be seldom considered whenever "scientists" discuss AI.

The question is, can one have true sentience and self awareness without the uncontrolled emotions and other thought patterns that humans are prone to, not to mention simple basics like certain mortality, pain, disease and more?

Can we ever build a machine that has the same capacity to suffer, or love, and why would we consider imposing something like, say, periodontal disease on something we create, as many believe gods did to us?
 
The emotion bit sounds like something we'll find out after we get AI, not as an ingredient on the way to AI.
 
What are "true sentience and self awareness," exactly?

Define your terms and you'll have your answer.
 
I think you miss my point. I suggest that it may be a critical ingredient, and I point out that I used the word "sentience" rather than "intelligence" deliberately.

Well, here's another thought. Perhaps the only sort of sentience we will recognize has emotional cues built in.

Take dogs as an example. I know how to "build" a dog. (Mommy dog and daddy dog love each other very much...) Now, do I think dogs have an "inner life" because I feel an emotional connection to them? How about other creatures we might describe as sentient, like an octopus?

And finally, suppose starfish were fully sentient, human level intelligence. However, they are keen only to do what starfish do with no impulse to relate to us at all. How am I going to detect this?

It seems to me the barrier may be relatability and communication, and those might require emotions like ours to work. I don't know much about autism and such, but isn't it sometimes described as a "different" sort of intelligence/sentience? And those are human beings.
 
What I find in reading or hearing such discussions, however, is that they seem to miss the simple fact that what makes humans human is the uncontrollable imperfections we all have. This was addressed in Star Trek through Data (the character) on occasion, but seems to be seldom considered whenever "scientists" discuss AI.
Data was based on Isaac Asimov's non-scientific description of robots. Asimov actually had no idea how computers worked and only used the "positronic" brain to avoid "electronic," which didn't sound futuristic enough. So it was somewhat funny to see ST copying the {cough} "positronic" brain technology.

You could probably argue that ST addressed this question more directly with the M-5 computer which raised the issue of autonomous action with incomplete understanding of morality. I suppose in some ways this is reminiscent of Steinbeck's novella Of Mice And Men where Lennie too had independent action but likewise a disastrously incomplete understanding of morality. The main difference being that M-5 eventually understood its shortcomings whereas Lennie never did.

The question is, can one have true sentience and self awareness without the uncontrolled emotions and other thought patterns that humans are prone to, not to mention simple basics like certain mortality, pain, disease and more?

Can we ever build a machine that has the same capacity to suffer, or love, and why would we consider imposing something like, say, periodontal disease on something we create, as many believe gods did to us?

This is one reason why Data is such a poor example for this question. Data existed with perfect morality but no emotions of any kind. Emotions were added later suggesting that they are completely independent of morality. This is quite the opposite of another ST episode where a man is turned into a robot and thereby loses his moral compass.

I would like to give you some answers, but this is somewhat complex in terms of cognitive versus computational theory. Within a computational framework, it should be possible to have entirely abstract actions based on limits that are artificially created. The problem with this assumption is that it ignores the frame problem, which could make such computer programs impractical because of the trade-off between size and speed. In other words, by the time you have a program large enough to handle the complexity of a human environment, the speed might be degraded to the point where it would no longer be fast enough to interact with people.
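To make that size-versus-speed worry a little more concrete, here's a rough toy sketch of the frame problem (Python, purely illustrative; the facts and actions are made-up placeholders, not any real planner). The point is just that a naive encoding has to state, for every action, every fact it does not change, and that bookkeeping swamps the useful part of the program:

[code]
# Toy illustration of the frame problem: in a naive logical planner, every
# action needs explicit "frame axioms" saying which facts it does NOT change.
# The bookkeeping grows with (actions x facts), so a program large enough to
# cover a human-scale environment spends most of its effort on what stayed true.

from itertools import product

facts = [f"fact_{i}" for i in range(1000)]      # hypothetical things that can be true about the world
actions = [f"action_{j}" for j in range(200)]   # hypothetical things the agent can do

# Each action genuinely changes only a single fact here...
effects = {a: {facts[hash(a) % len(facts)]} for a in actions}

# ...but the naive encoding still has to state, for every action, every fact
# that is left untouched. That is the frame-axiom explosion.
frame_axioms = [(a, f) for a, f in product(actions, facts) if f not in effects[a]]

print(f"{len(actions)} actions x {len(facts)} facts "
      f"=> {len(frame_axioms)} frame axioms just to say 'nothing else changed'")
[/code]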

In cognitive theory the issue is self motivation versus self control. Within a biological framework I can see how this works with emotion. I'm going to say that you can't entirely avoid emotion to create a cognitive AI but I'm not sure that you would need the full human range. And, I suppose it would be theoretically possible to create a cognitive AI that was a sociopath.
 
I'm personally working on an army of steampunk-looking robots to take over the world.

I'm pretty sure that by the time you figure out if they are sentient, you'll have been annihilated. Which is kind of the point of an army of steampunk-looking robots, no?
 
I'm personally working on an army of steampunk-looking robots to take over the world.

I'm pretty sure that by the time you figure out if they are sentient, you'll have been annihilated. Which is kind of the point of an army of steampunk-looking robots, no?

If there is anything I have ever learned from reading a few thousand science fiction stories, it is: "Don't build a machine without an Off Switch." :eek:
 
(much snipped)
In cognitive theory the issue is self motivation versus self control. Within a biological framework I can see how this works with emotion. I'm going to say that you can't entirely avoid emotion to create a cognitive AI but I'm not sure that you would need the full human range. And, I suppose it would be theoretically possible to create a cognitive AI that was a sociopath.

This is interesting because it brings up the idea that emotions can be selected, as from a menu. I'm not sure they can be, but if they can, I want my computer to love me unconditionally.
 
In the evolution of our own consciousness, emotions came first. In the form of basic survival reactions that are still hard-wired in the more primitive portions of our brains. As we know, these primitive reactions can easily overwhelm our modern, "rational" processes....

They are part and parcel of our uniquely human consciousness. However, that's not to say that a perfectly functional consciousness could not be fabricated without emotions. If we were programming such a consciousness, and we wanted some sort of "moral" component....At that point we could no doubt build it in....Something along the lines of Asimov's laws.

Should we get to that point....
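If we did try to build it in that way, I'd guess the crudest version looks like an ordered rule check on proposed actions, roughly in the spirit of Asimov's laws. A toy sketch (Python; the action attributes and law predicates are made up for illustration, not a claim about how a real system would decide):

[code]
# A rough sketch of a "moral component" as an ordered rule check, loosely in
# the spirit of Asimov's Three Laws. Everything here is a hypothetical
# placeholder for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

# Laws are checked in priority order: a higher law always overrides a lower one.
LAWS = [
    ("First Law",  lambda a: not a.harms_human),
    ("Second Law", lambda a: not a.disobeys_order),
    ("Third Law",  lambda a: not a.endangers_self),
]

def permitted(action: Action) -> tuple[bool, str]:
    """Return whether an action passes the law checks, and which law blocks it if not."""
    for name, check in LAWS:
        if not check(action):
            return False, f"blocked by {name}"
    return True, "permitted"

# An action that obeys an order but would harm a human is rejected by the
# First Law before the Second Law is even considered.
print(permitted(Action("shove bystander", harms_human=True)))
print(permitted(Action("fetch coffee")))
[/code]

A higher-priority law blocks an action before the lower ones are even consulted, which is about all the "moral component" amounts to in this toy version.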
 
In the evolution of our own consciousness, emotions came first. In the form of basic survival reactions that are still hard-wired in the more primitive portions of our brains. As we know, these primitive reactions can easily overwhelm our modern, "rational" processes....

They are part and parcel of our uniquely human consciousness. However, that's not to say that a perfectly functional consciousness could not be fabricated without emotions. If we were programming such a consciousness, and we wanted some sort of "moral" component....At that point we could no doubt build it in....Something along the lines of Asimov's laws.

Should we get to that point....

Then we must also ask if pleasure and pain will need to be programmed in. Otherwise, what's there to be emotional about?

And now another problem arises - an ethical problem. Like God, we are left to decide if we shall create a being capable of feeling pain so the feeling can be exploited to get us where we want to go. Our machines will wonder, just as we wonder, why their creator was such an ass. Maybe AI will tell us.
 
Like God, we are left to decide if we shall create a being capable of feeling pain so the feeling can be exploited to get us where we want to go. Our [s]machines[/s] children will wonder, just as we wonder, why their creator was such an ass.

FTFY. :p
 
No, he actually led a tragic life in Beleriand.

As far as the OP goes, what have you been able to tell from my posts so far? Do I pass the threshold?

Let's find out. Tell us what you see here:

[attached image]
 
