• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

AI Advice -- Not Ready for Prime Time?

From the article:

Making a computer an arbiter of moral judgement is uncomfortable enough on its own, but even its current less-refined state can have some harmful effects.

Whoever imagined that a computer programmed by humans would be better at moral judgments than humans?
 
Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist

https://futurism.com/delphi-ai-ethics-racist

One day it will work -- but not yet?

Maybe the problem is that there are no answers? That morality truly is subjective?

From my reading of the article, the problem is just that computers can't read... yet. They can match keywords, like Watson does, but actual comprehension is beyond them.
 
Djever think maybe racism IS the proper ethic? :D

What if? Sure would upset the apple cart.
 
From my reading of the article, the problem is just that computers can't read... yet. They can match keywords, like Watson does, but actual comprehension is beyond them.

I have the same problem with certain posters on the forum.
 
From my reading of the article, the problem is just that computers can't read... yet. They can match keywords, like Watson does, but actual comprehension is beyond them.

I had really hoped that AI had moved on from ELIZA. And it claims to have done so. Asking Google >How does AI understand text?< gets me "About 4,370,000,000 results". It is more than just matching words. Contextual "understanding" is now claimed.

But not, as I noted in the OP, apparently good enough. :(
 
Something is fishy here. Unless the AI was trained by world-renowned and widely acclaimed moral reasoners (do any such people even exist?), its opinions aren't going to be any better than those of the average sort of human who might want advice.

So either the scientists and the reporter are total idiots, or the scientists are cynical scumbags and the reporter is an idiot, or the scientists and the reporter are all cynical scumbags. Or the scientists are prototyping something without pretending it's already fit for purpose, and the reporter is a cynical scumbag.

Whatever the actual case, I don't think anyone can go wrong by assuming that reporters are total idiots, or cynical scumbags, or both.
 
I had really hoped that AI had moved on from ELIZA. And it claims to have done so. Asking Google >How does AI understand text?< gets me "About 4,370,000,000 results". It is more than just matching words. Contextual "understanding" is now claimed.

But not, as I noted in the OP, apparently good enough. :(

"Contextual" means they match not just the words, but the nearby words, and disambiguate based on the nearby words.

I studied AI in college in the early 1980s, and it was terrible. It was Eliza, and simple forms of Eliza at that. The idea of machine translation was being kicked around, and the basic idea was to map the input sentences into a sort of neutral, language-independent conceptual framework, and then translate that framework into the target language. In other words, form a rudimentary understanding of the input sentence and, using that understanding, form the sentences in the target language.

And machine translation sucked, for a long time. One day, though, I decided to try some of the funny experiments where you translate from the source language to the target and back, and see how mangled the original thoughts come out. I had done it before, sometimes with amusing results, but always with mangled meaning and grammar. Then I did it again and... it worked. The "round trip" translations had very few errors. I was amazed, and I looked for information on how they did it. The answer was that they gave up any pretense of understanding words or sentences. They just fed it lots of training data, so it knew that when one language said one thing, the other language said that other thing. There were rules to check it against, but they were patterns, not concepts.
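In other words, the trick boils down to something like a giant phrase table: when the source text contains this chunk of words, the parallel data says the target text contained that chunk. A deliberately tiny Python sketch of the idea, with invented phrase pairs (real systems learn millions of them from parallel corpora and layer statistics for reordering on top):

Code:
# Toy phrase-table "translation": find the longest memorized chunk of
# words and emit what the parallel data paired with it. No concepts,
# no grammar, just memorized patterns. (Phrase pairs are invented.)

PHRASE_TABLE = {
    ("good", "morning"): ["buenos", "días"],
    ("my", "friend"):    ["mi", "amigo"],
    ("thank", "you"):    ["gracias"],
    ("good",):           ["bueno"],
}

def translate(sentence, table=PHRASE_TABLE, max_len=3):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Prefer the longest known phrase starting at position i.
        for n in range(min(max_len, len(words) - i), 0, -1):
            chunk = tuple(words[i:i + n])
            if chunk in table:
                out.extend(table[chunk])
                i += n
                break
        else:
            out.append(words[i])  # unknown word: pass it through untouched
            i += 1
    return " ".join(out)

print(translate("good morning my friend"))   # -> "buenos días mi amigo"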

So, trying to make ethical judgements requires actual understanding. I am confident that they are trying, but they're just matching words, and the results are terrible.
 
Come to think of it, is there really any objective way to measure how "good" moral judgments are?

If the only possible measuring stick is what a human being thinks is good moral reasoning, then "good" will be in the eye of the beholder, no?
 
Well, a computer programmed by humans is better at playing chess than the humans who programmed it (or any other humans).

Chess has clear and finite rules, which helps a lot.

AI (last I checked) is still at the phase of being trained on databases, maybe being given some basic rules, or rules for extrapolating more rules from the database, and then creating the requested outputs by regurgitating, with some level of contextual sophistication, something that sounds right based on what the database sounds like. So it can never have more 'quality' than its training; it's not able to actually make what we'd call leaps of logic. Just leaps of grammar.

Now, it has gotten a lot better, especially at ***********. There's at least one bot my friends like to reblog because it sounds so near-human, but I hate it, because I keep reading a few paragraphs in before realising it doesn't actually make enough sense to be a real human thought, and that's when I notice the byline.
 
Sounding human-like isn't even much of an achievement, and it was done LONG before modern Bayesian AI learning. (Or alternately it could be argued to be the first step ever in that direction, albeit not a very useful one.) I already posted a thread long ago about how a very simple program can use Markov chains to produce human-like text (to various degrees) in the style of whoever you wish, if you have enough text to train it on.

Well... aphasic-human-like, anyway :p

I can even give you the source code if you wish. It really is small and trivial.

Edit: just to stress: it doesn't even do leaps of grammar, or really have any idea of grammar at all. It really is just Markov chains of words, organized as a tree. It has no idea what "I kissed a lovely evening with a grain of salt" means or what the grammar is; it will just produce it by looking at the last two words it spat out and the conditional probabilities of whatever third word it encountered after those in the text it trained on.
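For anyone who wants to see how little there is to it, here is a minimal Python sketch of that same order-2 idea (not my original program, and the training file name is just a placeholder): count which third words followed each pair of words, then generate by repeatedly sampling a follower of the last two words emitted.

Code:
import random
from collections import defaultdict

# Minimal order-2 word Markov chain: for every pair of consecutive words,
# record which word came next and how often, then generate text by
# repeatedly sampling a next word given the last two words emitted.

def train(text):
    words = text.split()
    model = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)].append(c)          # duplicates encode the frequencies
    return model

def generate(model, length=30):
    pair = random.choice(list(model))    # random starting pair
    out = list(pair)
    for _ in range(length):
        followers = model.get(pair)
        if not followers:                # dead end: nothing ever followed this pair
            break
        nxt = random.choice(followers)   # sample proportionally to observed counts
        out.append(nxt)
        pair = (pair[1], nxt)
    return " ".join(out)

corpus = open("training_text.txt").read()   # placeholder: any large plain-text file
print(generate(train(corpus)))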
 
What worries me is that the same type of learning is going into producing driverless cars. They may end up making similar "moral" choices in tough situations based on their training data.
 
What worries me is that the same type of learning is going into producing driverless cars. They may end up making similar "moral" choices in tough situations based on their training data.

As I keep repeating, the current machine learning isn't really true AI, and it won't do anything other than what it is programmed to do. Like, if it's optimizing a function to recognize faces, it will only ever do that. It won't decide to write a flight simulator instead. And if it's programmed to optimize driving a car, then driving a car is all it can ever do. It won't make any moral choices. It will just do what its training said is the right response in the given situation. It might crash into a school bus while trying to avoid one pedestrian, or vice versa, but it won't be because it made a moral choice like in the famous thought experiment. It will just do it because somewhere its rules said something like avoiding a crash (to ensure its driver's safety, but it won't even actually know that; it's just the data it was fed) has higher priority than the alternative.
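To make that concrete, here's a toy Python sketch of what such a "choice" reduces to: each candidate maneuver gets a summed penalty from a cost table, and the controller picks the cheapest one. The penalty numbers, outcome names, and options are invented for illustration; in a real system they come out of training and tuning, but nowhere is there anything resembling moral deliberation.

Code:
# Toy version of "avoidance by priority": each candidate maneuver gets a
# cost from a table of penalties, and the controller picks the cheapest.
# The numbers stand in for whatever training/tuning produced; the system
# never represents "this is a moral trade-off" anywhere.

PENALTIES = {
    "collision_pedestrian": 1000.0,
    "collision_vehicle":     800.0,
    "leave_lane":             50.0,
    "hard_brake":             10.0,
}

def action_cost(predicted_outcomes):
    """Sum the penalties of everything this maneuver is predicted to cause."""
    return sum(PENALTIES.get(o, 0.0) for o in predicted_outcomes)

def choose_action(candidates):
    # candidates: {action_name: [predicted outcomes]}
    return min(candidates, key=lambda a: action_cost(candidates[a]))

options = {
    "brake_in_lane": ["hard_brake", "collision_pedestrian"],
    "swerve_left":   ["leave_lane", "collision_vehicle"],
    "swerve_right":  ["leave_lane", "hard_brake"],
}
print(choose_action(options))   # -> "swerve_right" (lowest summed penalty)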
 
I see nothing there to contradict what I was saying. It was an AI supposed to learn to simulate certain kinds of physics, and it learned to simulate those kinds of physics. Nothing more.

In fact, it even explicitly tells you
A) what it actually does, and
B) that the new technique just does the same as the one in a paper from 2005, and all that is new is that it uses a neural network to do the simulation 30 to 60 times faster, at the cost of taking longer to train.

That's it. That's all. It didn't do anything except exactly what it was supposed to do.

It doesn't "beg to differ" with anything, except with your wanting to believe nonsense. Again.

And frankly, it would be nice if you actually had an argument you could write out for a change, or indeed the comprehension of the topic to actually have one. You know, instead of wasting my time with having to watch a whole video that you misunderstood, or possibly didn't even watch yourself, and then track down the referenced paper, which you obviously couldn't be bothered to do, just to see WTH confused you this time and sent you into another flight of fantasy.
 
