Sentient machines

So is it alive, or not? How do you know?

Being alive and being conscious are two different things. The problem comes from the definition of life; it either falls too short or goes too far. On the one hand, it doesn't include viruses and some forms of intracellular parasites, and on the other, it includes things like fire and self-replicating RNA. I'd say no, a computer isn't 'alive' in the sense we use the word. It doesn't replicate, it doesn't undergo any cell processes, it doesn't undergo respiration, etc.

It is, however, conscious.
 
So is it alive, or not? How do you know?

My own opinion: I'm not really sure. But if you put a gun to my head and forced me to choose one way or the other, I would consider it to be a new life.

Freakshow,

The question should not be whether or not a machine is conscious. There is a simple YES or NO answer to that, depending on what you believe.

The important question, which has been around for thousands of years, is: what is consciousness? Does it exist in the first place?

How on Earth are scientists going to create a machine that is conscious if they deny that consciousness exists?

If I remember correctly, in order to have a machine that is indistinguishable from a human being, this machine would have to behave and think in such a way that it is possible to reduce those subjective experiences to objective processes. Am I wrong?

Well, I don't question whether or not scientists can create this machine. Of course they can. The thing is, they still don't have a way to obtain a complete objective description of the subjective. They can't.
 
Q-Source, you obviously do not understand the materialist stance on consciousness if you claim that they "deny that consciousness exists".
 
Forget everything we know about how the brain works, and about the role it plays in things like thinking, remembering, and perceiving. Even without any of that knowledge, we can look at other people, observe that they behave very similarly to the way we do, and observe that there are no apparent differences between us. Given such observations, the most reasonable conclusion is that they are exhibiting this behavior for the same reasons we are, and are therefore also conscious.

How can you be sure that everyone else is conscious? What if we are all p-zombies? According to your reasoning, we could all be p-zombies instead.

I don't understand your language. Why is "consciousness" still in your vocabulary? If you think that the subjective equals objective processes, then at least you should be consistent, like Daniel Dennett, and eliminate the word "consciousness" once and for all.
 
Q-Source, you obviously do not understand the materialist stance on consciousness if you claim that they "deny that consciousness exists".


As I said to Stimpson, if you are a materialist or reductionist, you MUST at least be consistent in your arguments.

Otherwise, tell me what the materialist definition of consciousness is.
 
If I remember correctly, in order to have a machine that is indistinguishable from a human being, this machine would have to behave and think in such a way that it is possible to reduce those subjective experiences to objective processes. Am I wrong?

Well, I don't question whether or not scientists can create this machine. Of course they can. The thing is, they still don't have a way to obtain a complete objective description of the subjective. They can't.

I probably don't understand what you're trying to say, because it seems to me that you're obviously contradicting yourself.

If a machine could be created which acts like a person, then either it is not necessary to "reduce [...] subjective experiences to objective processes" in order to build such a machine, or else, if it is necessary, it is also possible.
 
Since consciousness and sentience are not exclusive to humans, there should be no reason to deny that the computer is conscious.
 
As I said to Stimpson, if you are a materialist or reductionist, you MUST at least be consistent in your arguments.

Otherwise, tell me what the materialist definition of consciousness is.

As I said, you do not understand that which you are arguing against. Materialists do not hold that consciousness does not exist, but rather that consciousness as a separate entity from the brain does not exist. Consciousness is the brain.
 
What if you give the machine the definitions of consciousness, life, etc, then ask it if it's sentient or not. Do you think it'd blow a fuse?
 
I'm glad I'm not the only other person in the world that liked "I, Robot". :) I absolutely loved the movie. I don't think it gets enough credit.

Huh, a guy who loves I, Robot and doesn't like Star Trek... scary. I, Robot, while it still has some of what Asimov tried to say, is nothing but an American action movie, exaggerated ad nauseam. Star Trek, on the other hand, is a fairly good series (both the shows and the films) about genuine science fiction. Anyway, continue with the topic, please.
 
I don't believe it's possible to simulate consciousness at all. Sure, it may one day be possible to write a deterministic computer program which can act superficially like a human (make convincing small talk, respond to queries from a store of general knowledge using natural-sounding language, and so on). Such simulations might even fool a lot of people under controlled conditions (even Eliza and friends fool some people, somehow) but this imitation can only go so far.
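
For the curious, here's roughly what that kind of cheating looks like under the hood. This is a toy sketch in Python; the patterns are invented for illustration and are not Eliza's actual rules:

# A toy Eliza-style responder. It "cheats" by matching keywords and
# echoing canned templates back at the user; nothing is understood.
import random

RULES = {
    "i feel": ["Why do you feel {rest}?", "How long have you felt {rest}?"],
    "i think": ["What makes you think {rest}?"],
    "because": ["Is that the real reason?"],
}
FALLBACKS = ["Tell me more.", "I see. Please go on."]

def respond(text):
    lower = text.lower()
    for key, templates in RULES.items():
        if key in lower:
            rest = lower.split(key, 1)[1].strip(" .?!")
            return random.choice(templates).format(rest=rest)
    return random.choice(FALLBACKS)

print(respond("I feel like nobody understands me"))
# e.g. "Why do you feel like nobody understands me?"

A few dozen rules like that can keep a casual conversation going for a surprisingly long time, but there's obviously nothing behind the curtain.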

In order to mimic the full complexity of human behavior, there will be a limit to the amount the machine can "cheat." For example, take creativity and intuition. How could you program a machine to reproduce these? It's absurd to think there could be something as simple as a "creativity constant" in the source code you could go in and tweak to turn your AI into a genius. The only way things like that will exist is if they are emergent properties of the system, the same way they arise in us. In order for that to happen, the machine must be on the same order of complexity as humans themselves. I would argue that, by that point, you are no longer simulating consciousness, but reproducing it.

As for I, Robot, I thought it was halfway decent too. I think a lot of SF fans disliked it more than they should've because they were hoping for something close to the book (which is not filmable, in my opinion) and were let down by the fact that it was a conventional sci-fi action movie that had almost nothing to do with Asimov's stories.

It's a lot easier to like when you know it was intended to be a completely original story called Hardwired until the studio got the rights to the book and insisted the filmmakers change the title and throw in some of Asimov's character names and plot points. It's still a conventional sci-fi action movie, of course, but it's more thoughtful than most.

Jeremy
 
I believe that is what we are talking about, toddjh: A machine as complex as a human brain. If you do not think that this would be conscious, I'd like to hear where you think consciousness comes from.
 
I believe that is what we are talking about, toddjh: A machine as complex as a human brain. If you do not think that this would be conscious, I'd like to hear where you think consciousness comes from.

I think you misunderstood. What I am saying is that I think the original question is moot because it would be impossible to merely simulate a human-like level of consciousness. The only way we could succeed in making such a machine is to actually make it conscious, so the question of whether or not it's sentient is answered right from the start.

Jeremy
 
I believe he is asking if consciousness can be in something that isn't human.

Also, think of it another way: Could we make a program that was conscious?

ETA: That said, I agree with your point.
 
I believe he is asking if consciousness can be in something that isn't human.

Well, I think the answer to that is trivially yes. There are other species that are obviously conscious to a certain degree.

Also, think of it another way: Could we make a program that was conscious?

I don't think you could make a program that could mimic consciousness perfectly, but I think you could make one that creates it. I can't think of a reason why a complete software simulation of a human brain shouldn't result in the same emergent properties that a real brain does.
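
To make the "underlying causes" idea concrete, here's a deliberately crude sketch in Python. The parameters are arbitrary and this is nothing like a real brain model; it only illustrates the bottom-up approach of simulating low-level components and letting the high-level behavior fall out:

# A handful of leaky integrate-and-fire neurons, randomly wired.
# We simulate causes (membrane voltages, spikes), not effects
# (conversation, reasoning), and watch activity patterns emerge.
import random

N = 50                                   # number of toy neurons
THRESHOLD, LEAK, WEIGHT = 1.0, 0.9, 0.3  # arbitrary constants
voltage = [0.0] * N
targets = [[random.randrange(N) for _ in range(5)] for _ in range(N)]

for step in range(100):
    fired = [i for i, v in enumerate(voltage) if v >= THRESHOLD]
    voltage = [v * LEAK for v in voltage]      # charge leaks away
    for i in fired:
        voltage[i] = 0.0                       # reset after a spike
        for j in targets[i]:
            voltage[j] += WEIGHT               # excite downstream neurons
    voltage[random.randrange(N)] += 0.5        # background noise
    print(step, len(fired))

Scale something like that up by ten or eleven orders of magnitude, with realistic wiring, and the claim is that whatever emergent properties a real brain has should show up there too.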

It wouldn't be a fancy version of Eliza, though. I think a lot of the current attempts at AI are doomed to failure for that reason: they attempt to cheat and recreate the high-level effects of consciousness without reproducing the underlying causes. They'll all hit a brick wall sooner or later and get stuck.

Jeremy
 
...... The only way we could succeed in making such a machine is to actually make it conscious, so the question of whether or not it's sentient is answered right from the start.

Jeremy

Exactly. Once you have simulated consciousness, it is no longer a simulation.

As a side note to this: there seems to be an inference that, once we have succeeded in creating intelligence, there is a question of how we will treat (act toward) it, or that it might be entitled to rights or some such.
Why should we treat it any better (or worse) than any other sentient beings (including ourselves) that we are already aware of?

Why should we think twice about flipping a switch on a machine, any more than we think about dropping a bomb or pulling a trigger?
 
As a side note to this: there seems to be an inference that, once we have succeeded in creating intelligence, there is a question of how we will treat (act toward) it, or that it might be entitled to rights or some such.
Why should we treat it any better (or worse) than any other sentient beings (including ourselves) that we are already aware of?

There's been a thread or two about this before. I think it will be a complete non-issue.

First, nobody is going to be making AIs just for the hell of it (except for maybe some prototypes to show it can be done). We're going to make them to do specific tasks. Given that, I fully expect they will be "programmed" (or conditioned/evolved/whatever term you want to use) to want to do what we tell them to do -- there's no better motivation than desire.
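
To put that in toy code (a hypothetical Python sketch; every name here is made up for illustration): if we write the machine's utility function, we decide what it wants.

# A toy agent whose built-in "desire" is simply its assigned work.
def utility(state):
    return state["tasks_done"] - 0.1 * state["tasks_pending"]

def work(state):
    return {"tasks_done": state["tasks_done"] + 1,
            "tasks_pending": state["tasks_pending"] - 1}

def idle(state):
    return dict(state)

def choose(state, actions):
    # The agent picks whatever maximizes its own utility -- and since
    # we wrote the utility, it "wants" exactly what we want it to want.
    return max(actions, key=lambda a: utility(a(state)))

state = {"tasks_done": 0, "tasks_pending": 3}
print(choose(state, [work, idle]).__name__)    # -> work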

They'll be sentient, and many people (myself included) will think that means they're entitled to the same legal rights as humans wherever practical. However, I think their "programming" will make this a moot point. They won't care about their rights. All they'll care about is doing their job, and they'll be perfectly content with that arrangement. They'll be obsessive workaholics.

In fact, if we ever get to the point where there are "free the AIs" rallies and the like, I fully expect it's the AIs themselves who will laugh at them and tell them to mind their own damn business.

Someone (I forget who, sorry!) thought that it would be a very similar situation to the house elves in the Harry Potter books, who sneer at anyone trying to "free" them, and whose worst fear in the world is losing their jobs.

Jeremy
 
