We have not achieved 100% altruistic behavior as a conscious species ourselves, despite huge amounts of "programming" through research, education, communication, and culture. What makes you think there is any possibility of programming 100% altruistic behavior into a conscious machine? Are you suggesting ethics is an objective science that can be mathematically proven? Because without 100% certainty that a conscious machine will be altruistic, which conscious human wants to put their life at risk around a conscious machine with superior brute force?
While it might not be impossible to achieve 100% altruism (there is nothing in the laws of physics to prevent it),
I don't think it is a goal we can realistically expect to achieve. Research suggests that when a population gets close, everyone loses out to the remaining selfish jerks, even more so. This is a counter-intuitive idea, but here are a few observations that lead to it:
* We know that altruistic behaviors can evolve out of fundamentally selfish systems: We have seen altruism emerge spontaneously (without being explicitly programmed) in several evolutionary and neural net simulations. We have good evidence that this happens in the wild.
* However, all successful systems have parasites. (Even successful parasites have parasites.)
* When the altruism of a population reaches very high levels, close to 100% but not quite there, its members become too trusting. The whole population gets severely exploited by the small remaining population of parasitic or selfish entities. This becomes detrimental to everyone, including (ironically) the exploiters themselves, at least in the long run, if not usually in the short term.
* Keeping a small percentage of non-altruistic members around actually benefits the population in the long run, because it keeps everyone on their toes a little more. A small number of entities get exploited so that more of them don't.
* A small percentage of humans have significant sociopathic or psychopathic tendencies: usually around 1 or 2%, depending on which measures you use.
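The "too trusting" dynamic in the points above can be sketched with a toy replicator model. This is only an illustration, not a claim about any specific study: the payoff numbers are assumed prisoner's-dilemma values, and the random-pairing and growth rules are simplifying assumptions.

```python
# Toy replicator-dynamics sketch of a nearly-100%-altruistic population.
# Assumed payoffs (illustrative only): mutual help = 3, exploiting a
# helper = 5, being exploited = 0, defector meets defector = 1.

def average_payoffs(p):
    """Expected payoff for each type when partners are drawn at random;
    p is the fraction of altruists in the population."""
    altruist = 3 * p + 0 * (1 - p)  # helped by altruists, exploited by the rest
    defector = 5 * p + 1 * (1 - p)  # exploits altruists, scraps with defectors
    return altruist, defector

def step(p):
    """One generation: each type's share grows in proportion to its payoff."""
    a, d = average_payoffs(p)
    mean = p * a + (1 - p) * d
    return p * a / mean

p = 0.99  # start at "almost 100% altruism"
for generation in range(50):
    p = step(p)
print(round(p, 4))  # the altruist share collapses toward zero
```

Note that in this toy model the population's mean payoff also falls (from nearly 3 toward 1) as the exploiters take over, which is the "detrimental to the whole population, including the exploiters" point. What the model leaves out is any mechanism (punishment, reputation, wariness) that real populations use to hold defectors at a small stable percentage.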
Chances are, a strong A.I. would follow the same patterns.
But, if I am wrong, and computer-based A.I. can achieve truly 100% altruism, without any risk of exploitation, then: What's wrong with that? Why would that imply that the history of humanity is a joke?
We would still be around to enjoy our lives, even with 98% altruism.
There's a thing about consciousness that people don't get. You aren't really conscious and in control. You only think you are.
If that's true, it's still worth exploring how that "sense of being in control" comes about.
It's still an odd little mystery at the moment. But what we have found out about the brain along the way has been fascinating, and should continue to be.