
Artificial intelligence: Can machines be programmed with morality?

At some point, machines will be programmed with morals.

  • Strongly agree

    Votes: 11 47.8%
  • Somewhat agree

    Votes: 2 8.7%
  • Neutral/Maybe

    Votes: 5 21.7%
  • Somewhat disagree

    Votes: 1 4.3%
  • Strongly disagree

    Votes: 4 17.4%

  • Total voters
    23

jay gw
Moral:

Concerned with principles of right and wrong or conforming to standards of behavior and character based on those principles.
Adhering to ethical and moral principles.
Refers to what is judged as right, just, or good.
____

Can a machine be programmed with morals? Why or why not?
 
Of course a machine can be programmed with 'morals'.

Consider Asimov's Three Laws of Robotics:

1) A robot may not harm a human being, nor, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by a human being, except where such orders conflict with the First Law.
3) A robot must protect its own existence, except where such protection conflicts with the First or Second Law.

It would not be difficult to program a sufficiently advanced machine to obey such laws, which are, in effect, morals: harming a human, or allowing a human to be harmed, is immoral (in this code). Disobedience in a machine is immoral, unless it is to avoid or prevent harm. And so forth.
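To make the idea concrete, here is a minimal, purely hypothetical sketch in Python (the `Action` fields and rule checks are invented for illustration; they are not from Asimov or from any real robotics software) of how such a prioritized rule filter could look:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action the machine is considering (hypothetical model)."""
    description: str
    harms_human: bool        # would executing this action harm a human?
    prevents_harm: bool      # would it prevent harm to a human?
    ordered_by_human: bool   # was it ordered by a human?
    risks_self: bool         # does it endanger the machine itself?

def permitted(action: Action) -> bool:
    """Apply the three rules in strict priority order."""
    # First Law: never harm a human (inaction is modeled as just another Action).
    if action.harms_human:
        return False
    # Second Law: obey orders from humans, unless obeying would violate the
    # First Law (already excluded above).
    if action.ordered_by_human:
        return True
    # Third Law: protect your own existence -- refuse self-endangering actions
    # unless they serve a higher law (here, preventing harm to a human).
    if action.risks_self and not action.prevents_harm:
        return False
    return True

# Example: an ordered action that would harm a human is refused.
print(permitted(Action("push bystander", harms_human=True, prevents_harm=False,
                       ordered_by_human=True, risks_self=False)))  # False
```

The strict order of the checks is what encodes the priority of the three laws: the First Law check runs before anything else, and obedience is only considered once harm has been ruled out.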

The catch is, the morals would be - as all morals are - completely subjective - subject, in this case, to the desires of the programmer in question. And as Asimov's stories readily demonstrated, even a strict moral code followed by a logical machine can sometimes lead to difficult problems and situations.

I do not doubt, in the least, that man will program future machines with some manner of moral code. Too many humans suffer from the Frankenstein Complex to avoid it. Movies like The Terminator and The Matrix serve to underscore the importance of making machines with strict human-centric morals.

Why would it be otherwise?
 
It would not be difficult to program a sufficiently advanced machine to obey such laws, which are, in effect, morals: harming a human, or allowing a human to be harmed, is immoral (in this code). Disobedience in a machine is immoral, unless it is to avoid or prevent harm. And so forth.
It would be extraordinarily difficult to create such a program.
 
I'm not an anti-technologist by any means, but I don't see how it would be possible.

The basis of all morality is what benefits the continuation of life. In a social group, this leads to "women and children first", and on a greater scale it leads to soldiers making the ultimate sacrifice to defend their nation and thereby their way of life.

It's not that a machine couldn't understand morality. It's that it's such a complicated issue with too many nebulous variables and conditionals that we ourselves don't understand it. It's possible we'll never understand it to a sufficient degree that we could impart these instructions to a machine successfully.
 
It would be extraordinarily difficult to create such a program.

I'm sure they once said the same thing about holistic search engines, word processors that recognize grammatical errors, and online reality-based video games as well. In fact, I know they once said the same thing about bipedal walking robots, robotic hands that can play the piano, and bipedal robots that can climb stairs. But we've got 'em now.

It would be extraordinarily difficult today. But since we're still in the infancy of AI research, I'd say there's more than a fair chance it will happen.

The machine that could implement said programming would have to be a memory powerhouse, though - that much is certain. Its pattern recognition system would have to consistently be able to identify - well, everything that we do. Such a machine would be only a few steps away from human at that point, anyway.

Don't make the mistake so many people make of assuming that, because we cannot do something now, we will never be able to do it.
 
Phrost,

It's not that a machine couldn't understand morality. It's that it's such a complicated issue with too many nebulous variables and conditionals that we ourselves don't understand it. It's possible we'll never understand it to a sufficient degree that we could impart these instructions to a machine successfully.
How successfully? If we can reasonably say that people can be trained to be moral, then saying that a computer can be programmed with morality should not require that we be any more successful in imparting those instructions to a machine than we are at imparting them to people.

And if you ask me, that's not very successful at all.


Anyway, the way I see it, morality is just a matter of living in accordance with your values, whatever they may be. So the question I would ask is whether we can impart a machine with actual values, or will we always be limited to simply programming it in such a way that it has no choice but to do what it is told? I guess we are coming pretty close to the notion of free will here. Can we make a machine which acts on its own initiative according to some set of values (either pre-programmed or learned)? Or will machines always just do exactly what they are told?
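For what it's worth, the contrast between the two possibilities can be sketched in code. Everything below is hypothetical and grossly simplified; it only shows the shape of "doing what it is told" versus "choosing according to a value function", which could itself be pre-programmed or learned:

```python
from typing import Callable, Dict, List

# Possibility 1: the machine simply does what it is told.
def obey(command: str, handlers: Dict[str, Callable[[], None]]) -> None:
    """Execute the named command; no judgment is involved."""
    handlers[command]()

# Possibility 2: the machine chooses among candidate actions according to a
# value function, which could be hand-written or learned from experience.
def choose(candidates: List[str], value: Callable[[str], float]) -> str:
    """Pick the candidate the value function scores highest."""
    return max(candidates, key=value)

# A toy, hand-written value function standing in for the machine's "values".
def toy_value(action: str) -> float:
    scores = {"help person": 2.0, "ignore person": 0.0, "harm person": -10.0}
    return scores.get(action, 0.0)

print(choose(["help person", "ignore person", "harm person"], toy_value))
# -> "help person"
```

Whether the second machine has "actual values", or is still only doing what it was told one level up, is exactly the free-will question raised above.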

I am fairly confident that the former will end up being the case.


Dr. Stupid
 
I'm not an anti-technologist by any means, but I don't see how it would be possible.

The basis of all morality is what benefits the continuation of life. In a social group, this leads to "women and children first", and on a greater scale it leads to soldiers making the ultimate sacrifice to defend their nation and thereby their way of life.

It's not that a machine couldn't understand morality. It's that it's such a complicated issue with too many nebulous variables and conditionals that we ourselves don't understand it. It's possible we'll never understand it to a sufficient degree that we could impart these instructions to a machine successfully.

Morality, unfortunately, has come to include factors that do not actually have much to do with the continuation of life. Religion and politics have so stirred the pot that morals are now more like a code of desired behavior for some. For example, it was once considered (and still is, to some) immoral for a widow to remarry. Yet, by remarrying, the widow could reproduce and thus further the continuation of life.

As for it being a 'complicated issue', I agree - so any programmer seeking to add 'morals' to the OS of a machine would have to first clearly define what morals would be important for the machine. No matter what the programmer selects, though, someone will decide his morals are incomplete, incorrect, etc.

No, there is no way to program 'absolute' or 'perfect' morals into a machine - such morals simply don't exist. But there is no doubt that in the future, machines will be able to be programmed with morality, of one form or another.
 
So the question I would ask is whether we can impart a machine with actual values, or will we always be limited to simply programming it in such a way that it has no choice but to do what it is told?

Who said we have a choice but to do what we're told by our internal DNA programming?
 
No, it isn't - it's what benefits society.
Or the individual. But then, an argument could be made that is in itself a benefit to society, in that a system that is set up to protect and promote individual liberty naturally makes for a better society. So that ends up being "what benefits society", as well. But the practice and perspective are very different.
 
Or the individual. But then, an argument could be made that is in itself a benefit to society, in that a system that is set up to protect and promote individual liberty naturally makes for a better society. So that ends up being "what benefits society", as well. But the practice and perspective are very different.

Morality is just a natural version of law - a set of codes evolved by a particular society over time. It is subjective - but to a group, not an individual.
 
I'm sure they once said the same thing about holistic search engines, word processors that recognize grammatical errors, and online reality-based video games as well. In fact, I know they once said the same thing about bipedal walking robots, robotic hands that can play the piano, and bipedal robots that can climb stairs. But we've got 'em now.
Fifty years ago, a research team at MIT thought they would work out how the visual system worked over the summer, then move on to bigger problems. Neither they nor anyone else yet understands precisely how we process visual information, nor can we duplicate the process in a machine.

Building moral principles that rely on extremely complex and poorly-defined concepts directly into a machine designed to flexibly learn is something we can't even begin to guess how to do.
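One hypothetical way to see where the difficulty lives: even if a learning machine is wrapped in a hand-written moral filter, the filter bottoms out in a predicate that nobody knows how to implement. The sketch below is invented for illustration, not a description of any real system:

```python
def would_cause_harm(action) -> bool:
    """The load-bearing moral concept. There is currently no known way to
    implement this for an open-ended world -- which is the whole problem."""
    raise NotImplementedError("'harm' is a complex, poorly-defined concept")

def morally_filtered(learned_policy, observation):
    """Let a learned policy propose an action, but veto harmful ones."""
    action = learned_policy(observation)
    if would_cause_harm(action):  # <- the unsolved part
        return None               # refuse to act
    return action
```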
 
Fifty years ago, a research team at MIT thought they would work out how the visual system worked over the summer, then move on to bigger problems. Neither they nor anyone else yet understands precisely how we process visual information, nor can we duplicate the process in a machine.

Building moral principles that rely on extremely complex and poorly-defined concepts directly into a machine designed to flexibly learn is something we can't even begin to guess how to do.

...yet.

Honestly, Melen, do you believe that there are some things man will never learn how to do? Are you seriously in the camp that believes some scientific knowledge is forever out of mankind's grasp? Given the scale of human advancement, 50 years is a drop in the bucket. Or are you guessing that mankind will go extinct long before we solve the problem of artificial morality? I'm curious to know what you think... why you think it can never be done.

:dragon:
 
It would not be difficult to program a sufficiently advanced machine to obey such laws, which are, in effect, morals: harming a human, or allowing a human to be harmed, is immoral (in this code).

What happens if the military wants robots for war?
 
What happens if the military wants robots for war?
The robots could be built without such moral restrictions. In the same way that the military can get nukes when the rest of us can't, some day the military will get killer robots when the rest of us can't.
 
We don't even know that it's theoretically possible to put Asimov's Laws into practice, much less practically possible. Hell, we don't even know if humans can be taught principles they can be guaranteed not to violate.

There are some very serious and real questions here that can't just be handwaved away by saying "in the future, we'll surely learn how".
 
Morality is just a natural version of law - a set of codes evolved by a particular society over time. It is subjective - but to a group, not an individual.
I think we might be talking about different things. I am referring to whether or not the system of morality benefits individuals, not whether or not it is subjective to individuals.

For example, society at large would not be affected at all by the murder of a homeless bum in an alley somewhere. (I know that sounds cruel and mean, but it's true. You and I would never even know it happened.) But that doesn't mean that the murder is morally acceptable. It would still be an immoral act, because of the rights of the individual.

I'm not saying it is an all-or-nothing thing between what's good for society and what's good for individuals. I am saying that there are instances where each is important.
 
There are some very serious and real questions here that can't just be handwaved away by saying "in the future, we'll surely learn how".

I agree. Some things, we will have to wait and see. And although some things might turn out to be possible, they might be possible in different ways than we envision. People were dreaming of going to the moon long before it happened. But I'm sure that the Apollo program wasn't exactly what they had in mind. Time will tell what the future holds. It will be very interesting.
 
The robots could be built without such moral restrictions. In the same way that the military can get nukes when the rest of us can't, some day the military will get killer robots when the rest of us can't.

So military robots will be the same as nuclear bombs? Lots of countries have those.
 
So military robots will be the same as nuclear bombs? Lots of countries have those.
I'm only saying that the issue of a set of moral laws for robots does not conflict with the idea of the military getting robots to fight, because they will be constructed for particular functions, and the usual laws/standards do not apply. I used nukes as an example of where the military gets to have much more dangerous stuff than the regular citizens.
 
