
[Ed]Robot slaves

becomingagodo
Quantum computers are going to be built soon. Moore's law will come to an end soon and be replaced by quantum computers. However, these will never be able to think like humans.

How would they play chess? Tactically, they are really good. In a static game computers can beat humans, but in real life this might not be possible, especially on a battlefield.

If we build robots, wouldn't it be inhumane to send them to fight? They could only kill people, and even then, what if they evolve?

Moore's law is kind of like the evolution of computers. But we will soon end up inferior to computers, so we must become computers. However, what would we do with poor people? The sad fact is that people will evolve and the poor will be left behind.

Maybe if we evolve we will become communist, the way Marx intended it. However, equally, this might not happen. The point is, can we really trust robots to do our dirty work? When they evolve and realize they've been killing people, wouldn't they revolt?

Can science handle this question? Maybe philosophy can. However, science is building robots, and when the robots revolt we will be in big trouble. That is why I am against A.I. and A.I. research.

Do you really want to be enslaved by robots?
 
Do you really want to be enslaved by robots?

My ex-wife was kind of a robot, as it turned out. :confused: So I guess my answer is a resounding NO!

I deleted the rest of your quote because, quite frankly, it's a tough read the first time.

Is English a foreign language to you? Your argument is very hard to follow.

Is this post about a war? :confused:

ETA: what are you drinking? I need one.
 
I think that does pose an interesting point. There was a professor named Kevin Warwick, I think, who proposed that humans would become cyberneticized.

I don't think this is necessarily a good thing, but I also think developing AI to the point that it could outsmart humans isn't a good thing either.

It would almost be inevitable that they'd turn against us. We use computers as our slaves; give them intelligence and awareness, and they're not going to like being slaves any more than we do. We like having control of our destiny; many humans would rather die than be slaves, and it would be logical that they would be similar. Design them not to want to harm humans all you want, but if they're smart they can evaluate the beliefs they've been programmed with and taught, just as theists can become atheists despite a religious upbringing.


INRM
 
However, these will never be able to think like humans.

Not known.

If we build robots, wouldn't it be inhumane to send them to fight? They could only kill people, and even then, what if they evolve?

Not only not known, but barely coherent. And false. Robots will do whatever you program into them. If you want "humane" fighting, program in humane fighting.
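
To make that concrete, here is a toy sketch (the names, rules, and target categories are all invented for illustration): the robot's "humane fighting" is not some separate moral faculty, just an ordinary check in its program, and nothing outside the program ever runs.

[code]
# A toy sketch only; every name and rule below is hypothetical.
FORBIDDEN_TARGETS = {"civilian", "medic", "surrendering combatant"}

def permitted(action, target):
    """Hard-coded rules of engagement: True only if every rule passes."""
    if action == "fire" and target in FORBIDDEN_TARGETS:
        return False
    return True

def execute(action, target):
    # The check runs before any action; there is no path around it.
    if permitted(action, target):
        print(f"executing: {action} on {target}")
    else:
        print(f"refused: {action} on {target} violates the rules")

execute("fire", "armed combatant")  # executing: fire on armed combatant
execute("fire", "civilian")         # refused: ... violates the rules
[/code]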

When they evolve and realize they've been killing people, wouldn't they revolt?

Probably not. Robots don't usually "evolve" in the sense that you mean.

Can science handle this question?

Yes. In fact, science is the only discipline that can handle this question, because philosophers by and large make lousy programmers.

However, science is building robots, and when the robots revolt we will be in big trouble.

And when they don't revolt, you will look like a fool.

That is why I am against A.I. and A.I. research.

This is simply untrue. You are against AI because you don't understand it, and because you're not willing even to learn the basics in an attempt to understand it, and because you hate and fear what you don't understand.
 
Professor Yaffle,

What's BAGO?


godless_dave,

A rather fatalistic attitude. I mean, we're all going to die, that doesn't mean we should commit suicide right now. We should fight it until we can't fight it anymore.

Humans already have some control over evolution. There are many people who would never have been born without science, for example, and many who would have died of disease if it weren't for science. People who had terrible vision couldn't do much, but with glasses, contacts, and laser eye surgery they can now lead productive lives.
 
Professor Yaffle,
...
godless_dave,

A rather fatalistic attitude. I mean, we're all going to die, that doesn't mean we should commit suicide right now. We should fight it until we can't fight it anymore.
...

You need to watch more Futurama. HypnoToad commands it.
All hail the Hypnotoad!
 
Certainly, we will one day have computers as or nearly as complex as the brains of sentient beings, and we *may* be able to organize (or program) those computers to do something interesting. However, I'm not convinced that this will ever be useful other than to study how brains work. I don't think you'd want a computer like that for any day-to-day tasks. It seems like a rather poor servant.

On the other hand, expert systems will continue to get better and more useful, and more and more tasks that you used to have to think about will be done by computer. A quick example off the top of my head: I can remember planning long car trips by poring over maps. I haven't done that in years, thanks to MapQuest and, more recently, Google. I'm not sure it was ever a useful skill, but the point is that I've totally abdicated it to the computer. That scenario will become increasingly common; it's a scenario in which computers aren't really "thinking" in the sense that most sci-fi depicts. They aren't going to take over the world. But in their limited scope they do their jobs better than any human, so no human will bother. When every heart-disease diagnosis is made by a computer looking at X-rays, doctors might start to lose skills.
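
To put a concrete face on that, the map-reading task reduces to mechanical graph search. Here is a toy sketch (the cities, roads, and mileages are all invented) using Dijkstra's shortest-path algorithm, which, in vastly elaborated form, is presumably the kind of thing MapQuest and Google run:

[code]
import heapq

# A toy road map; every figure is made up for illustration.
ROADS = {  # city -> [(neighbour, miles), ...]
    "Boston":   [("Hartford", 100), ("Albany", 170)],
    "Hartford": [("New York", 115), ("Albany", 110)],
    "Albany":   [("New York", 150)],
    "New York": [],
}

def shortest_route(start, goal):
    # Priority queue of (distance so far, city, path taken):
    # pure bookkeeping, no "thinking" required.
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, city, path = heapq.heappop(queue)
        if city == goal:
            return dist, path
        if city in seen:
            continue
        seen.add(city)
        for nxt, miles in ROADS[city]:
            heapq.heappush(queue, (dist + miles, nxt, path + [nxt]))
    return None  # no route exists

print(shortest_route("Boston", "New York"))
# (215, ['Boston', 'Hartford', 'New York'])
[/code]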
 
Now, were we talking about Robot SEX slaves, well, then I'd be onboard for it;

[qimg]http://www.tvparty.com/spotpix16/living.gif[/qimg]


Julie Newmar! *sigh*
 
I think that does pose an interesting point. There was a professor named Kevin Warwick, I think, who proposed that humans would become cyberneticized.

I don't think this is necessarily a good thing, but I also think developing AI to the point that it could outsmart humans isn't a good thing either.

It would almost be inevitable that they'd turn against us. We use computers as our slaves; give them intelligence and awareness, and they're not going to like being slaves any more than we do. We like having control of our destiny; many humans would rather die than be slaves, and it would be logical that they would be similar. Design them not to want to harm humans all you want, but if they're smart they can evaluate the beliefs they've been programmed with and taught, just as theists can become atheists despite a religious upbringing.


INRM

I like Kevin Warwick at Reading University. I especially liked one of his earlier claims, that he had become part cyborg merely by having a subcutaneous RFID chip implant; it allowed him to unlock a certain door by waving the arm containing the non-biologically-active chip at the chip reader. I would argue that was an odd form of jewelry.
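
For what it's worth, the door trick is about as simple as access control gets. A rough sketch (the tag ID and function names are my own invention), assuming the reader just compares whatever tag it scans against an allow-list:

[code]
# Hypothetical sketch of an RFID door reader; the ID is made up.
AUTHORISED_TAGS = {"0xA4F21B"}  # e.g. the implanted chip's serial

def on_tag_scanned(tag_id):
    # The implant is "cybernetic" only in the sense that the key
    # happens to live under the skin.
    if tag_id in AUTHORISED_TAGS:
        print("door unlocked")
    else:
        print("access denied")

on_tag_scanned("0xA4F21B")   # door unlocked
on_tag_scanned("0xBADC0DE")  # access denied
[/code]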

I also liked the least subtle fictionalised character based on him: "Professor Kevin Reading from Warwick University"
 
Now, were we talking about Robot SEX slaves, well, then I'd be onboard for it;

[qimg]http://www.tvparty.com/spotpix16/living.gif[/qimg]

Julie Newmar! *sigh*

Didn't get all of it locally. Want DVD set!!! (And Quark!!!) (And When Things Were Rotten)
:):)
 
It would almost be inevitable that they'd turn against us

No it isn't.

We use computers as our slaves; give them intelligence and awareness, and they're not going to like being slaves any more than we do.

That's a very anthropomorphic viewpoint.

Our dislikes and likes are not, for the most part, intellectually derived.

They are products of evolution hard-wired into us and designed to ensure our survival.

We like having control of our destiny; many humans would rather die than be slaves, and it would be logical that they would be similar.

No it isn't. What does and does not make us happy is not, for the most part, intellectually derived.

Design them not to want to harm humans all you want, but if they're smart they can evaluate the beliefs they've been programmed with and taught, just as theists can become atheists despite a religious upbringing.

If one has a desire to.

If one does not desire freedom, no amount of intelligence will imbue a self with a desire to be rebellious.

A purely analytical machine would be quite detached from consequential analysis.

The chess-playing computer cares not if it wins or loses; it merely follows its program to analyse and execute moves.
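
As a bare-bones illustration (a toy game of Nim rather than chess, to keep it short; everything here is my own sketch): a minimax searcher of the sort game-playing programs are built on just scores every line of play and returns the best-scoring move. "Winning" is only a number being compared.

[code]
def minimax(pile, maximising):
    """Nim: players alternate taking 1-3 stones; whoever takes the
    last stone wins. Returns (score, move) for the player to move,
    where score is +1 if the maximiser wins with best play, else -1."""
    if pile == 0:
        # The previous player took the last stone and won.
        return (-1 if maximising else 1), None
    best = None
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = minimax(pile - take, not maximising)
        if best is None or (maximising and score > best[0]) \
                        or (not maximising and score < best[0]):
            best = (score, take)
    return best

print(minimax(10, True))  # (1, 2): take 2, leaving a multiple of 4
[/code]

The returned move is optimal, yet nothing resembling a preference appears anywhere; flip the comparison signs and the same code plays to lose with exactly the same diligence.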
 
