Three Laws Of Robotics

Johnny Pneumatic

The Three Laws of Robotics are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

When we finally create intelligent machines, do we have the right to program the Laws into them?
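To make the question concrete, here is a toy sketch of the Laws treated as an ordered priority list, where a lower law gives way to a higher one. Every flag name and the scoring rule below are made up purely for illustration; this is not a claim about how a real robot brain would work.

Code:

# Toy model: each candidate action carries made-up boolean flags.
# The Laws are ranked; an action's "worst violation" is the highest-priority
# law it breaks, and we prefer the action whose worst violation ranks lowest.

PRIORITY = ["harms_human", "disobeys_order", "endangers_self"]  # 1st, 2nd, 3rd Law

def worst_violation(action):
    """Rank of the highest-priority law this action violates (lower = more serious)."""
    for rank, flag in enumerate(PRIORITY):
        if action.get(flag, False):
            return rank
    return len(PRIORITY)  # violates nothing

def choose(candidates):
    """Pick the candidate whose worst violation is least serious."""
    return max(candidates, key=worst_violation)

actions = [
    {"name": "obey an order to hurt someone", "harms_human": True},
    {"name": "refuse the order", "disobeys_order": True},
]
print(choose(actions)["name"])  # "refuse the order": the Second Law yields to the First

Even in this toy form, all of the hard work is hidden inside deciding whether a given action really sets harms_human, which is exactly where the arguments start.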
 
I really don't think you could - at least, not in the way Asimov suggested you could. Those principles would be no different than any other programming, and could be altered or erased.
 
Wrath of the Swarm said:
I really don't think you could - at least, not in the way Asimov suggested you could. Those principles would be no different than any other programming, and could be altered or erased.

It should be possible to hard-wire them in, much like programs were wired into the first computers.
 
But I don't think you could hard-wire in such complex rules without severely limiting the abilities of the AIs. Those are some very sophisticated concepts we're talking about here.
 
Wrath of the Swarm said:
But I don't think you could hard-wire in such complex rules without severely limiting the abilities of the AIs. Those are some very sophisticated concepts we're talking about here.

You couldn't program them in either, since bewareofdogmas has misquoted them.

Properly they are:

1. A robot may not knowingly injure a human being, or, through inaction, knowingly allow a human being to come to harm.

With the same modifications for the Second and Third Laws.

I still suspect that if you were able to program them in, there is a fair chance that your AI would freeze up even without the Zeroth Law.
 
The Three Laws of Robotics are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

#2 and #3 are slavery if they are conscious.
 
And then most of the subsequent stories involved clever semantic tricks to get round the laws.

Rolfe.
 
Personally, I do not believe it is right, but I don't worry too much about it. I don't think it would be possible with any AI that could really be called sentient. The laws are too 'high-level' to be feasibly programmed, in my opinion. You would /probably/ end up either exhaustively telling it how /not/ to kill humans, or you would run the risk of ending up with infinite recursion.
Exhaustively telling it not to kill people means the obvious. First saying "Do not poke humans in a killing way. Do not set humans on fire. Do not throw humans off of tall objects. Do not prevent humans from breathing. Do not starve humans to death." And then defining what you mean by 'poking' and 'setting on fire', which is the start of a project of inhuman proportions.
The second option is to build something into the AI that can recognize 'humans' and 'killing' and keep the two from being combined. But /this/ quickly turns into an AI in itself. After all, it has the job of working with high-level concepts. And it has to be smarter than the AI it's built into, so that that AI can't out-think it with far-reaching plans or by thinking about things in a roundabout way [time to 'take care of' the 'laundry']. But then how do you know that this meta-AI won't develop a dislike for humankind, or at least a negligence toward its work?
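Here is roughly what that first, "exhaustive" option looks like as a toy sketch; every rule in it is invented for illustration. The flaw shows up immediately: anything you did not think to list, including a euphemism, is allowed by default.

Code:

# Toy blacklist: enumerate forbidden (action, target) pairs one by one.
# All entries are made up; a real list would never be finished.
FORBIDDEN = {
    ("poke_lethally", "human"),
    ("set_on_fire", "human"),
    ("throw_from_height", "human"),
    ("suffocate", "human"),
    ("starve", "human"),
}

def allowed(action, target):
    """Naive check: anything not explicitly forbidden is permitted."""
    return (action, target) not in FORBIDDEN

print(allowed("set_on_fire", "human"))         # False - caught, because we listed it
print(allowed("take_care_of", "the_laundry"))  # True  - the euphemism sails right past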

Better to just raise your AI right in the first place, so that it might not want to Kill All Humans.
 
bewareofdogmas said:
Nope, someone else did; they are as I found them.

Or whoever had quoted them hadn't read The Naked Sun.
 
geni said:
Or whoever had quoted them hadn't read The Naked Sun.
Or Asimov modified them as he went along. After all, he could always blame his characters' faulty memories.
 
bewareofdogmas said:
The Three Laws of Robotics are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

#2 and #3 are slavery if they are conscious.

Why? It's not slavery if your basic patterns of thought force you to do something.
 
geni said:
Why? It's not slavery if your basic patterns of thought force you to do something.
Somebody is now going to start questioning where the dictates of our conscience fit in this scenario, at which point I run for cover....

Rolfe.
 
wayrad said:
Or Asimov modified them as he went along. After all, he could always blame his characters' faulty memories.

The knowingly clause is the only one that applied to every robot (there were two other modifications, but they only applied to small numbers).

If you think about it, there is no way to set things up so it does not apply.
 
And how do you define 'death'? And how do you define 'human being'? And once you've defined these and similarly important terms, how do you get the AIs to recognize when the defining conditions are met?
 
What if the only way to save a human life was to order a robot to kill another human? Say a hostage situation.
 
Brian said:
What if the only way to save a human life was to order a robot to kill another human? Say a hostage situation.

Numbers on either side. If the two numbers were equal, then the Second Law kicks in. (Well, that's how it seemed to work in the books.)
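As a toy sketch of that tie-break (the counting rule is only how it seemed to work in the books, and every name below is made up):

Code:

# First Law: compare how many humans come to harm either way.
# Only when the counts are equal does the Second Law (the order given) decide.

def decide(harmed_if_obeyed, harmed_if_refused, ordered_to_act):
    if harmed_if_obeyed < harmed_if_refused:
        return "obey"
    if harmed_if_obeyed > harmed_if_refused:
        return "refuse"
    # Equal harm either way: fall through to the Second Law and follow the order.
    return "obey" if ordered_to_act else "refuse"

# Hostage case: shooting the hostage-taker harms one human, standing by harms one.
print(decide(harmed_if_obeyed=1, harmed_if_refused=1, ordered_to_act=True))  # "obey"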
 
geni said:
The knowingly clause is the only one that applied to every robot (there were two other modifications, but they only applied to small numbers).

If you think about it, there is no way to set things up so it does not apply.
I meant from the early to the later stories. They were written over a period of many years, so it wouldn't surprise me if he had afterthoughts. That might explain the supposed misquote.
 
I think the ethics of AI is something that we'll have to address in the next century or so. I also think it's more complicated than most people give it credit for.

For instance, think of "personal assistant" robots, programmed (or whatever term you prefer to use) to do what you say. "Slavery!" some people would cry. But I don't think so. An artificial intelligence would be programmed to want to fulfill its intended purpose. Since it is an intelligent being, I'd say that it should have the protection of the law just like any human, and be able to do whatever it wanted. But if the only thing it wants to do is follow your instructions, then I don't see the problem there. You're happy, the robot is happy, no one is forced to do anything, and everybody wins.

So that moves the sticky issue to: is it ethical to create AIs with those kinds of desires? Again, I don't see why not. If you ask any old robot whether it would have been better not to have been created, I bet you'd get the same answer any human would give you: "No way!"

Now, this is what I think is the really interesting issue. Should we, as humans, consider ourselves qualified to second-guess the programmed innate desires of artificial intelligences? If the AI says it's perfectly content in its life and wants nothing more than to go on being a toilet unclogging bot, are we really entitled to declare ourselves the authorities on what other sentient beings should and shouldn't want out of life? To me, that is setting ourselves up as their superiors just as we would be if we forced them to do things against their wills.

If AIs ever become commonplace in society, I'm sure "anti-slavery" protests will be common as well. I also predict that the AIs themselves will be the most vocal critics of the protesters.

Jeremy
 
Wrath of the Swarm said:
But I don't think you could hard-wire in such complex rules without severely limiting the abilities of the AIs. Those are some very sophisticated concepts we're talking about here.

Which is why Asimov used the "Positronic Brain" for his robots, supposedly a step beyond our 'primitive' programming of robot brains today. Note that Star Trek: TNG borrowed the positronic-brain terminology for Commander Data.

And as for the slavery question, do we enslave cows, chickens, et al. for our service? Just how developed would a robot have to be to be considered to have the rights of a human? Sounds like something to keep the philosophers and lawyers busy for the next few centuries.
 
