
Sentient machines

Freakshow

As you'll soon see, this does belong here as a philosophy discussion, and not in the computers and technology section. :)

Take the following points as being given:
  1. You have a test for evaluating sentience and consciousness. What this test actually consists of isn't really relevant to this discussion. The test is 100% flawless. It relies entirely on observation of and interaction with the subject being tested. (Not everyone knows what the Turing Test is, which is why I did not use that phrase.)
  2. Someone has created a computer that passes this test.
  3. The person who created the computer has died. We know little about how he managed to create it.
  4. It is not acceptable to tear the computer apart in an attempt to reverse-engineer it.
  5. No, I am not getting this idea from that Star Trek episode where they try to determine if Data is alive. :) I actually don't like Star Trek.
The question is: do you now conclude that this computer has attained consciousness and sentience? Or do you just conclude that it is an inanimate object that is pretending to be conscious? If you conclude that it is inanimate and just pretending, how do you do so without circular reasoning?

Because if you conclude that it is not alive, and is just pretending, how do you know that other humans are alive and aren't just pretending? You can't inhabit others' minds. You can't live their experiences. All you can do is observe and evaluate. These observations and evaluations tell you that others are conscious, while telling you at the same time that someone in a vegetative state is not.

So what's the difference? You don't have some magical insight into other humans' existence, and you don't have some magical insight into the computer's existence. Sure, you can say "Well, I know it's electricity and silicon", but similar things can be said about the human brain. That is therefore not an acceptable reply.

So is it alive, or not? How do you know?

My own opinion: I'm not really sure. But if you put a gun to my head and forced me to choose one way or the other, I would consider it to be a new life.
 
I agree with you.

As far as inquiring into why people might say otherwise: I think most people have simply learned to associate "personness" with things that look human. (There may also be inborn programming involved.) Put that machine in a human-looking body, let it walk and talk, and people would treat it like any other person if they didn't know about its insides.
 
"Do Androids Dream of Electric Sheep?" is a good place to start. You'd probably know it better as Blade Runner.

But anyway, the problem you describe is a p-zombie: something that looks sentient but is actually programmed to imitate sentient behavior. Either the imitation will eventually diverge, and it's going to be really obvious, or you'd have to know everything about the universe in order to design a precise mimicry mechanism. Otherwise, if it's made to take in information and respond in ways that appear indistinguishable from self-awareness, it's going to become self-aware as a consequence.
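To make the divergence point concrete, here's a toy sketch in Python (the prompts and canned replies are all invented for illustration): a finite lookup-table imitator only works on its script, while even a crude mechanism that actually processes its input keeps producing an answer off-script.

[code]
# Mimicry versus (very crude) processing -- an invented toy example.

SCRIPT = {
    "hello": "Hi there!",
    "how are you?": "Fine, thanks. You?",
}

def imitator(prompt):
    """Pure mimicry: a finite table of rehearsed answers."""
    # Anything off-script exposes the trick immediately.
    return SCRIPT.get(prompt.lower(), "...")

def responder(prompt):
    """Crude but generative: derives its reply from the input itself."""
    last_word = prompt.rstrip("?!.").split()[-1]
    return f"You mentioned {last_word!r} - tell me more about that."

for prompt in ["Hello", "What's behind the moon?"]:
    print(f"{prompt!r}: imitator={imitator(prompt)!r}, responder={responder(prompt)!r}")
[/code]

The lookup table answers "Hello" fine and falls flat on the novel question; scaling the table up to cover every possible conversation is exactly the "know everything about the universe" requirement.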

The other option is solipsism, where everyone but you is a p-zombie.
 
As you'll soon see, this does belong here as a philosophy discussion, and not in the computers and technology section. :)

Take the following two points as being given:
  1. You have a test for evaluating sentience and consciousness. What this test actually consists of isn't really relevant to this discussion. The test is 100% flawless. It relies entirely on observation of and interaction with the subject being tested. (Not everyone knows what the Turing Test is, which is why I did not use that phrase.)
  2. Someone has created a computer that passes this test.

So, in other words, assume that this computer is sentient and conscious.

  5. No, I am not getting this idea from that Star Trek episode where they try to determine if Data is alive. :) I actually don't like Star Trek.
The question is: do you now conclude that this computer has attained consciousness and sentience? Or do you just conclude that it is an inanimate object that is pretending to be conscious? If you conclude that it is inanimate and just pretending, how do you do so without circular reasoning?
Well, given that we've already assumed that the computer is sentient and conscious, I don't see how I would come to any other conclusion...

But I would ask: how is it at all possible to design a test that is 100% flawless at determining whether something is conscious? I can imagine that such a test might be possible in the future, when we have a better understanding of what consciousness is, physically speaking, but right now?

So what's the difference? You don't have some magical insight into other humans' existence, and you don't have some magical insight into the computer's existence. Sure, you can say "Well, I know it's electricity and silicon", but similar things can be said about the human brain. That is therefore not an acceptable reply.
Except that I know that the other human's brain evolved by natural selection. I know that we share a common ancestor and a whole lot of similarity in our DNA. I know that consciousness, whatever it is, is a complex thing, that it probably didn't evolve just randomly, and that if it was "selected for" then it's probably common to the entire species.
Basically, I'm similar enough, and have similar enough origins, to other human beings to be able to say: if it's true of me and appears to be true of other humans, it probably is.

But a computer has completely different origins from me, so I can't make that assumption, and we need to go deeper to understand where the observed "consciousness" comes from. Why does the computer act conscious? Is it because it is, or because it's been designed specifically to appear so?

So is it alive, or not? How do you know?
This is a good question. How do we determine when a computer should be called "alive"? I certainly agree that computers could one day be alive, that there could be computers that would fit that definition.
But I also doubt that computer life will be anything at all like human life. Even things like self-preservation will have to be programmed in, just as they were for us, so to think that any of our eccentricities (like consciousness) will just arise out of intelligence is making one assumption too many for me.

My own opinion: I'm not really sure. But if you put a gun to my head and forced me to choose one way or the other, I would consider it to be a new life.
Given the initial assumption that we have a 100% flawless test of consciousness, I would agree that it should be termed alive. Difficult word, that, isn't it? That just because something is conscious we would call it alive, and yet we would also call a bacterium alive. But what do a conscious non-replicating computer and a bacterium have in common?
Damned if I know.
 
The question is: do you now conclude that this computer has attained consciousness and sentience? Or do you just conclude that it is an inanimate object that is pretending to be conscious? If you conclude that it is inanimate and just pretending, how do you do so without circular reasoning?
One of the truly great philosophical questions: the ghost in the machine. I've moved from a proponent to an agnostic on the subject of dualism, but I'm still interested in when and how sentience emerges, and in how we would know if another entity was possessed of it.

Along with Blade Runner ("Do Androids Dream of Electric Sheep?") see:

Artificial Intelligence: AI
2001: A Space Odyssey
Bicentennial Man
I, Robot

Bicentennial Man is a bit preachy, and I, Robot is mostly just an action movie, but both are worth seeing if you're interested in the question. Of course there are many sci-fi books dealing at least in part with the subject.
 
One of the truly great philosophical questions: the ghost in the machine. I've moved from a proponent to an agnostic on the subject of dualism, but I'm still interested in when and how sentience emerges, and in how we would know if another entity was possessed of it.

Along with Blade Runner ("Do Androids Dream of Electric Sheep?") see:

Artificial Intelligence: AI
2001: A Space Odyssey
Bicentennial Man
I, Robot

Bicentennial Man is a bit preachy, and I, Robot is mostly just an action movie, but both are worth seeing if you're interested in the question. Of course there are many sci-fi books dealing at least in part with the subject.
I'm glad I'm not the only person in the world who liked "I, Robot". :) I absolutely loved the movie. I don't think it gets enough credit.
 
Roboramma,

Well, given that we've already assumed that the computer is sentient and concious, I don't see how I would come to any other conclusion...

But I would ask, how is it at all possible to design a test that is 100% flawless at determining if something is concious? I can imagine that such might be possible in the future when we have a better understanding of what conciousness is, physically speaking, but right now?
100% flawlessness isn't really necessary for Freakshow's argument to work. All that is needed is that the AI's behavior be indistinguishable from that of a sentient being.

Unless I am misunderstanding Freakshow, the argument can be rephrased something like this:

The only reason any of us has for believing that anybody else possesses consciousness like we do is our observation of their behavior. So the question is: if some machine exhibited the same type of behavior as a human being, would there be any criteria by which we could determine that the human being is sentient and the machine is not?

The answer is quite clearly "NO".

So we are left with either solipsism, or accepting that the machine is also sentient. Any belief that the human is sentient and the machine is not would have to be based on the preconceived belief that machines cannot be sentient. Hence the circularity part.


As far as I can see, the only real question which remains is whether, with respect to other people, it is actually rational to believe that they have consciousness.

I say that it is. My argument goes like this:

1) I know that what I think of as my "consciousness" affects my behavior, and does so in a very strong way.

2) There is extremely strong scientific evidence that human behavior is completely controlled by the brain, and that there is not any mysterious "stuff" interacting with the brain to influence human behavior.

3) My brain, and my behavior, are more or less the same as every other human being's, indicating that it is extremely unlikely that, while other people's behavior is caused by their brains (as science indicates), my own is not.

4) I thus conclude that my own behavior is caused by my brain, which means that whatever it is which I think of as my "consciousness", must be something my brain is doing.

5) I thus conclude that other human beings are also conscious.

Note that if I could not conclude (4), it would not be rational for me to conclude that other people are also conscious at all. That is the thing that really boggles my mind about people who insist that there is more to consciousness than brain activity. If they really believe this, then they have absolutely no rational justification for believing that anybody other than themselves possesses this additional component!


That's how I see it, anyway.


Dr. Stupid
 
Roboramma,


100% flawlessness isn't really necessary for Freakshow's argument to work. All that is needed is that the AI's behavior be indistinguishable from that of a sentient being.

Unless I am misunderstanding Freakshow, the argument can be rephrased something like this:

The only reason any of us has for believing that anybody else possesses consciousness like we do is our observation of their behavior. So the question is: if some machine exhibited the same type of behavior as a human being, would there be any criteria by which we could determine that the human being is sentient and the machine is not?

The answer is quite clearly "NO".

So we are left with either solipsism, or accepting that the machine is also sentient. Any belief that the human is sentient and the machine is not would have to be based on the preconceived belief that machines cannot be sentient. Hence the circularity part.


As far as I can see, the only real question which remains is whether, with respect to other people, it is actually rational to believe that they have consciousness.

I say that it is. My argument goes like this:

1) I know that what I think of as my "consciousness" affects my behavior, and does so in a very strong way.

2) There is extremely strong scientific evidence that human behavior is completely controlled by the brain, and that there is not any mysterious "stuff" interacting with the brain to influence human behavior.

3) My brain, and my behavior, are more or less the same as every other human being's, indicating that it is extremely unlikely that, while other people's behavior is caused by their brains (as science indicates), my own is not.

4) I thus conclude that my own behavior is caused by my brain, which means that whatever it is which I think of as my "consciousness", must be something my brain is doing.

5) I thus conclude that other human beings are also conscious.

Note that if I could not conclude (4), it would not be rational for me to conclude that other people are also conscious at all. That is the thing that really boggles my mind about people who insist that there is more to consciousness than brain activity. If they really believe this, then they have absolutely no rational justification for believing that anybody other than themselves possesses this additional component!


That's how I see it, anyway.


Dr. Stupid
Great post! I agreed with almost all of it. :) I'm a little unsure of the last paragraph, but just need to read it some more tomorrow. :)

The only reason I specified "100% flawless" (you are right, it isn't absolutely required) is to help box the reader into a position where they are forced to focus on the philosophical matter at hand, instead of going into the issues of whether or not what is given is possible or reliable.
 
Freakshow,

Great post! I agreed with almost all of it. I'm a little unsure of the last paragraph, but just need to read it some more tomorrow.
Actually, looking back at what I said, my last paragraph doesn't come across quite as I intended. What I should have said is that it is step (4) which allows me to conclude that other people are also conscious. But really step (4) is a bit stronger than what is actually needed to reasonably draw this conclusion.

Forget everything we know about how the brain works, and about the role it plays in things like thinking, remembering, and perceiving. Even without any of that knowledge, we can look at other people, observe that they behave very similarly to the way we do, and observe that there are no apparent differences between us. Given such observations, the most reasonable conclusion is that they are exhibiting this behavior for the same reasons we are, and are therefore also conscious.

The scientific evidence concerning the brain just serves as supporting evidence for this hypothesis. It turns a reasonable theory into well-supported knowledge about how things work.

But again, if one presumes that there is some aspect of consciousness which is not reflected in our behavior, then there is no longer any basis for concluding that other people possess it.

So basically what the scientific evidence rules out is the notion of some sort of "ghost in the machine" which is conscious, and somehow interacts with our brains to produce our behavior.


Dr. Stupid
 
Some things that we have that computers do not (limited to what can be measured):
1. General knowledge (maybe).
2. Knowledge of lots of rubbish, like what happened to us yesterday, or what happens in a restaurant.
3. The ability to evaluate this knowledge.
4. The ability to speak like a human.
5. The ability to see and analyze what you see.


All of the above are very difficult for a computer to have; yet we all manage them easily.

The only bit I do not understand is: what are "consciousness and sentience"? That is part of the original question. It is talked about a lot but never defined.
 
As you'll soon see, this does belong here as a philosophy discussion, and not in the computers and technology section. :)

Take the following two points as being given:
  1. You have a test for evaluating sentience and consciousness. What this test actually consists of isn't really relevant to this discussion. The test is 100% flawless. It relies entirely on observation of and interaction with the subject being tested. (Not everyone knows what the Turing Test is, which is why I did not use that phrase.)
  2. Someone has created a computer that passes this test.

As was pointed out, these two assumptions jointly are equivalent to the assumption that the computer is sentient and conscious.

Your question can therefore be rephrased as "Is a sentient and conscious computer sentient and conscious?" To which the answer should be apparent upon a few hours' contemplation.
 
Some things that we have that computers do not (limited to what can be measured):
1. General knowledge (maybe).
2. Knowledge of lots of rubbish, like what happened to us yesterday, or what happens in a restaurant.
3. The ability to evaluate this knowledge.

The Cyc project was based on the assumption that humans are little more than gigantic bags of learned knowledge, and that the reason all AI attempts had been miserable failures was that the machines lacked this knowledge.

So teams would type in knowledge all day long, then let the machine "think" about it all night, and ask it questions the next morning.

It came up with some interesting observations about things you actually had to learn yourself at some point, but now never think about, like "If I turn around, is the thing behind me still there?"
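For flavor, here's a toy sketch in Python of that "type in facts, let it churn, ask it questions" loop. The facts and rules are invented for illustration; Cyc's real representation and inference engine are vastly richer than this.

[code]
# Toy forward-chaining over hand-typed common-sense facts.
# Purely illustrative; nothing like Cyc's actual machinery.

facts = {
    ("isa", "Fido", "dog"),
    ("isa", "dog", "animal"),
    ("has", "animal", "location"),   # things are always somewhere
}

rules = [
    # isa(x, y) and isa(y, z)  =>  isa(x, z)
    lambda fs: {("isa", a, c)
                for (p1, a, b) in fs if p1 == "isa"
                for (p2, b2, c) in fs if p2 == "isa" and b2 == b},
    # isa(x, y) and has(y, prop)  =>  has(x, prop)
    lambda fs: {("has", a, prop)
                for (p1, a, b) in fs if p1 == "isa"
                for (p2, b2, prop) in fs if p2 == "has" and b2 == b},
]

# "Let it think about it all night": apply rules to a fixed point.
changed = True
while changed:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    changed = bool(new)
    facts |= new

# "Ask it questions the next morning."
print(("has", "Fido", "location") in facts)  # True: Fido is somewhere
[/code]

The point of projects like Cyc is that even trivialities like "Fido is somewhere" have to be typed in or derived; nothing comes for free.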
 
As was pointed out, these two assumptions jointly are equivalent to the assumption that the computer is sentient and conscious.

Your question can therefore be rephrased as "Is a sentient and conscious computer sentient and conscious?" To which the answer should be apparent upon a few hours' contemplation.
That actually wasn't what I meant to convey, but I realize now where the confusion comes from. I'll have to work on rewording it somewhat for future use. Thanks for filing the bug. ;)

The point here isn't actually to talk about computers, but to talk about "what is consciousness?" I fall into the camp that thinks we human beings are nothing more than organic computers. I don't believe in dualism, and have not been able to find any argument for it other than "Well, we humans are special somehow. We have SOULS!"

So, if people who believe that (that humans have souls, and that this is the explanation for our consciousness) are confronted with this situation, how do they react? Do they say "Someone just managed to come up with a computer that can flawlessly imitate a conscious being, and therefore pass this flawless test. It is a phenomenal trick, but still just a trick", or do they say "Hey, I was wrong. Machines CAN be conscious. So maybe we don't have souls after all. Maybe we humans are just sentient computers, too."?
 
Unfortunately, no 100% test can be devised to demonstrate that any of us is conscious. If you accept that machines may indeed be sentient and conscious, presumably you also accept that you could be exactly replaced by compute power and I/O sensors/servos. Identifying a programmer you would trust to do so might be a problem, though.

As to solipsism, I've never heard an iron-clad argument disproving it; the best I can do is state, by gentlemen's agreement, that we both should act as if neither of us is The Solipsist, should such exist; i.e. I don't "think" I'm The Solipsist. "Thought Exists" remains the only tautology; otherwise we would not be doing what we are doing, or "think" we are doing.

I remain unconvinced that I am solely my behavior as perceived.
 
The point here isn't actually to talk about computers, but to talk about "what is consciousness?" I fall into the camp that thinks we human beings are nothing more than organic computers. I don't believe in dualism, and have not been able to find any argument for it other than "Well, we humans are special somehow. We have SOULS!"

So, if people who believe that (that humans have souls, and that this is the explanation for our consciousness) are confronted with this situation, how do they react?

Well, if you accept as an assumption that only things with souls are conscious (and that only humans have souls), then it's not circular reasoning to reject the idea of a non-souled thing being conscious. (You obviously do not accept that assumption, and in fact make the contrary assumption. This is good. Assumptions are good. I like assumptions -- but I also like to keep track of them, so that they can be rejected or amended as necessary.)

So we now have this hypothetical "test for consciousness." The obvious question is "Does it work?" If you also assume that the test is flawless (which I think is a much less valid assumption -- any behavioral test will have a chance of being passed "through sheer dumb luck."), then our second assumption is incompatible with our assumptions about souls. One of them has to go.

Which is more likely, that I'm wrong about souls, or that I'm wrong about the ability of humans to construct and administer a "flawless" test? I would argue in this circumstance that the second is much less likely -- humans have never been able to do anything "flawlessly," and the idea that this one test, one for which we have no possible way of cross-checking its results, happens to be the one thing we can do perfectly, defies credibility.

A more reasonable scenario is one where the test is "highly accurate" (which you can formalize at any arbitrary level of accuracy if you need to). The question then becomes: which is more likely, that the test missed on this one instance, or that the assumption about souls is incorrect? Again, I'd want to know how you demonstrated how accurate the test was -- and how you distinguished a test for "humanness" from a test for "consciousness." I don't think the epistemological machinery is there for you to be able to say, with conviction, that the test is "highly accurate."
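To make the "highly accurate" case concrete, here's a back-of-the-envelope Bayes calculation in Python (every number is invented for illustration): a single pass from even a 99%-accurate behavioral test moves you surprisingly little if genuinely conscious machines are rare among the things being tested.

[code]
# Bayes' rule with invented numbers: how much should one "pass" from a
# 99%-accurate consciousness test actually move you?

sensitivity = 0.99   # P(passes | conscious)              -- assumed
specificity = 0.99   # P(fails | not conscious)           -- assumed
prior = 0.001        # P(conscious) among tested machines -- assumed

p_pass = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_pass

print(f"P(conscious | passed) = {posterior:.3f}")  # ~0.090
[/code]

Under those assumptions, roughly nine passes in ten are false positives -- the quantitative version of being passed "through sheer dumb luck."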

Basically, this works to an epistemological question. Yes, there is possibly evidence that could falsify the assumption that humans and only humans are conscious. But I can't see how that evidence could itself be acquired, or the validity of the evidence established. It's a bootstrapping problem. I can't develop a unicorn detector without a theory of unicorns, and I can't establish a theory of unicorns without the ability to detect at least one unicorn to give me some data.
 
Some things that we have that computers do not (limited to what can be measured):
1. General knowledge (maybe).
2. Knowledge of lots of rubbish, like what happened to us yesterday, or what happens in a restaurant.
3. The ability to evaluate this knowledge.
4. The ability to speak like a human.
5. The ability to see and analyze what you see.


All of the above are very difficult for a computer to have; yet we all manage them easily.

The only bit I do not understand is: what are "consciousness and sentience"? That is part of the original question. It is talked about a lot but never defined.

I think to this list you can add awareness of self, fear of death, and the ability to imagine other realities.
 
I guess when it comes down to it, I think we really need to understand consciousness, and understand the brain, before we can create a workable test to see if something is conscious.
What is consciousness, really?
What good is it to a brain? Is it something that comes about because of the other functions of the brain, or is it something that is there specifically for a purpose?
I.e., does being intelligent produce consciousness, or is consciousness necessary for intelligence?

Or is that a poorly framed question?

One reason for this, and one problem that I see with the Turing Test, but also with any test of consciousness, is that there are always multiple solutions to problems. Just because someone can program a computer to recognise a letter doesn't mean the computer is recognising it the same way that a human is (see the toy sketch below).
On the other hand, I think that a computer that had to work within the limits of the human brain's information-processing power, and that could still do all the things that we humans do, would probably have to be conscious.

But that's just a guess.
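Here's a toy illustration of that "multiple solutions" point, with an invented three-by-three glyph: two recognisers that agree the glyph is a "T", for entirely different internal reasons.

[code]
# Two different mechanisms, identical observable behaviour (toy example).

GLYPH = ["###",
         ".#.",
         ".#."]

TEMPLATES = {
    "T": ["###", ".#.", ".#."],
    "L": ["#..", "#..", "###"],
}

def recognise_by_template(glyph):
    """Mechanism 1: whole-image lookup against stored exemplars."""
    for letter, template in TEMPLATES.items():
        if glyph == template:
            return letter
    return "?"

def recognise_by_features(glyph):
    """Mechanism 2: crude structural features; no stored images at all."""
    has_top_bar = glyph[0] == "###"
    has_stem = all(row[1] == "#" for row in glyph)
    return "T" if has_top_bar and has_stem else "?"

print(recognise_by_template(GLYPH), recognise_by_features(GLYPH))  # T T
[/code]

Identical behaviour on this input, completely different innards; no purely behavioural test distinguishes them.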
 
There are actually two different questions here that are getting confused.

The first question is "Is the computer conscious and sentient?" and the answer to that question is (with our current understanding of consciousness and sentience) a resounding "We don't know."

The other question is "Should we consider the computer as conscious and sentient?" and the answer to that question is "Yes, as it appears to be, and we don't know differently."
 
