Worried about Artificial Intelligence?

I suspect DOD usage of AI will be more like using it to pore over trillions of pixels of satellite data and video footage to flag potentially significant items than to train it to answer customer service questions for people calling the Wendy's complaint line. Not so much replacing a person as carrying out a task a person cannot perform.

The examples he gave were an AI drafting a battle plan that will be reviewed and refined (and corrected) by expert humans, which requires a lower standard of reliability, and an AI making kill decisions on the battlefield, which requires a much higher standard of reliability.
 
The examples he gave were an AI drafting a battle plan that will be reviewed and refined (and corrected) by expert humans, which requires a lower standard of reliability, and an AI making kill decisions on the battlefield, which requires a much higher standard of reliability.

Oh, so exactly the same as Wendy's, then.
 
From Bing chat: "I’m sorry, but I’m not sure what you’re asking. Could you please provide more context or rephrase your question?"
That's surely a better answer, but here's what Bing Chat gave me when I posed the same question to it (as best as I could remember):

"I’m not sure what you’re asking. Are you asking which of the two is more likely to be green: a purple hat or a blue smoothie? If so, a blue smoothie is more likely to be green if it contains kale or spinach. These leafy greens are often used in smoothies and can give them a green color. However, if the smoothie does not contain these ingredients, it may not be green at all. On the other hand, a purple hat is unlikely to be green, as it is typically purple in color."

This is pretty similar to the word pâté that ChatGPT gave me. It doesn't understand that the contingencies that can make a smoothie green are irrelevant if you already know the smoothie is blue. Which is just a corollary of the fact that it doesn't understand much of anything.

And while I chose this example because I was pretty sure it would get it wrong, having seen how it gets it wrong, I can produce a simpler example (also from Bing Chat):

"Can a blue smoothie be green?"

"Yes, it is possible for a blue smoothie to appear green. [reasons why a smoothie can be green or blue, but not reasons that a blue smoothie can be/appear green]"
 
What is the "right" answer? I would say there isn't a single right answer to the query. It's not like a maths question.

For that query what we are looking for is "how would a human answer it?" I suspect that if you ask a spectrum of humans you'd be surprised at the range of answers you'd get.
 
What is the "right" answer? I would say there isn't a single right answer to the query. It's not like a maths question.

For that query what we are looking for is "how would a human answer it?" I suspect that if you ask a spectrum of humans you'd be surprised at the range of answers you'd get.
It's less about what the "right" answer is and more about whether the answer is reasonable. Telling me that a blue smoothie can be green because smoothies in general can be green isn't reasonable.
 
What I want to see is an AI using sarcasm or calling out internet circle-jerks.

"Hurr durr in Japan they don't distinguish between blue and green!" would be an awesome response for a chatbot to give to the question.

Ask a chatbot to look at r/books and answer if anyone else doesn't "get" The Old Man and the Sea, and it'll give you a totally sincere, "yes, that is correct based on my review of the corpus."

Ask a human the same question, and you might receive, "oh, yay, this question again." An answer along those lines would go a long way towards convincing me that LLMs actually think and feel.
 
Can't even make logical inferences, as far as I know. Doesn't stop people from insisting that they are possessed of the ineffable qualia of sentience.

AIs are oracles of wisdom!


Literal oracles, like the ancient Greek ones, where some lady would get high off volcanic gases and people would accept her ravings as truth.
 
I thought it was generally known that LLMs can't make logical deductions and therefore don't?

What kind of deduction? They can summarize an article, find bugs in code; they can clearly deduce at least a little. The question is how much: how many steps, how obscure a connection...

Just asked Bard what could cause a puddle of water under a fish tank, and he gave me an extensive list of ways water can get out of a fish tank. But first, that's an easy connection, and second, it's something you can simply google the answer to. There's a ton of text talking about it.

The test has to be something obscure, something that isn't likely to be simply found...
 
I question the assumption that LLMs are deducing anything.

Does a compiler deduce anything about the code it's compiling? Does a unit test deduce anything about the function it's testing?

Does a pattern recognizer do anything besides rote pattern analysis and mechanically reporting its results?
 
I question the assumption that LLMs are deducing anything.

Does a compiler deduce anything about the code it's compiling? Does a unit test deduce anything about the function it's testing?

Does a pattern recognizer do anything besides rote pattern analysis and mechanically reporting its results?

I ask what you mean by "deduction".
 
I mean it can do some basic stuff... the question is how much more it can do, how to test it, and how to stay away from examples it could find in its training data.

By basic stuff I mean something like this (yeah, he's a bit talkative, but I want to leave all the ballast in):

lets have abstract facts .. object A has property B and object C has property D

Sure. Here are some examples of abstract facts of the form "object A has property B and object C has property D":

Object A is a parent of object B.
Object A is a child of object C.
Object A is a friend of object B.
Object A is an enemy of object C.
Object A is similar to object B.
Object A is different from object C.

These are just a few examples, and there are many other abstract facts that can be expressed in this form. The specific facts that you can express will depend on the context of your conversation.

Here are some more concrete examples of abstract facts, along with their interpretations:

"The apple is red and the banana is yellow."
"The book is heavy and the cat is light."
"The dog is furry and the bird is feathery."
"The sun is hot and the moon is cold."
"The tree is tall and the flower is small."

These facts are all examples of how abstract properties can be applied to concrete objects. The property "red" can be applied to the apple, the property "yellow" can be applied to the banana, and so on.

I hope this helps!

now let's say that any object that has property B cannot have property D at the same time

Yes, that's a valid statement. It means that the two properties are mutually exclusive or contradictory. In other words, an object can have either property B or property D, but not both.

Here are some examples of this:

An object cannot be both alive and dead.
An object cannot be both inside and outside a room.
An object cannot be both solid and liquid.
An object cannot be both square and round.

These examples show that there are many different pairs of properties that are mutually exclusive. The specific pairs will depend on the context, but the basic concept is always the same: an object cannot have two contradictory properties at the same time.

I hope this helps!

can the object C have property B ?

No, object C cannot have property B because property B and property D are mutually exclusive. This means that an object can have either property B or property D, but not both.

Since object C has property D, it cannot have property B. This is because having both property B and property D would be contradictory.
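
For comparison, here's what that deduction looks like written out mechanically. This is my own minimal Python sketch, not anything Bard produced: it just brute-forces the possible property assignments for object C under the stated facts and exclusion rule, and confirms the only consistent answer is "no".

# Minimal sketch (mine, not Bard's) of the same deduction, done by brute force:
# enumerate every possible truth assignment for object C and keep the ones that
# satisfy the stated fact (C has D) and the exclusion rule (not both B and D).
from itertools import product

def consistent(c_has_B: bool, c_has_D: bool) -> bool:
    # Known fact: object C has property D.
    # Rule: no object can have property B and property D at the same time.
    return c_has_D and not (c_has_B and c_has_D)

survivors = [(b, d) for b, d in product([False, True], repeat=2) if consistent(b, d)]
print(survivors)                      # [(False, True)] -- the only consistent assignment
print(any(b for b, _ in survivors))   # False -> object C cannot have property B

The point being that the inference Bard got right here is the kind a few-line model check can do exhaustively; the interesting question is how far past that it can go.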
 
It's less about what the "right" answer is and more about whether the answer is reasonable. Telling me that a blue smoothie can be green because smoothies in general can be green isn't reasonable.
I still say you'd need to compare its answer to the answers given by humans to see if the answer was "reasonable" or not. These AIs are being proposed to replace humans, so it is how well they reproduce/mimic what a human would say that is the criterion used to evaluate their usefulness.

ETA:

Must remember these are not meant to be nor claimed to be general AIs. Their real-world uses include such things as answering a customer's query about their order status, explaining how to make a payment for their energy bill, and technical support. And now apparently writing legal decisions for British judges. :(
 
Ask a human the same question, and you might receive, "oh, yay, this question again." An answer along those lines would go a long way towards convincing me that LLMs actually think and feel.
Though of course it could have been trained to trick you into thinking that it thinks and feels. We would have no way to tell the difference.
 
I still say you'd need to compare its answer to the answers given by humans to see if the answer was "reasonable" or not.
Why? There doesn't seem to be any purpose to this exercise. If everyone in the world agreed it was reasonable, it would still be unreasonable.

Must remember these are not meant to be nor claimed to be general AIs.
The reason I gave ChatGPT that prompt is that the claim was made that we already have AI capable of doing everything that HAL from 2001 is depicted as being capable of. HAL's pretty clearly meant to be a general intelligence, capable of abstract reasoning, and one that's introduced as being virtually incapable of making errors (a setup for a fall, of course, but one that's taken seriously by the people in the film).

People underestimate how much of a problem 'hallucination' is. It's not something that's going to get fixed with improved LLMs (or other single mode models), because it's a fundamental problem with current approaches. We're getting very good at barking up the wrong tree.
 
I suspect the next big thing in AI will be "value loading". It's already being brought up in newer science fiction (and in some older SF too). Not just rote pattern recognition and response, but pattern evaluation according to some independently-arrived-at value system. "This pattern matches common patterns in my corpus, but is the wrong conclusion because [reasons]." And there will be lengthy debate about the kinds of [reasons] we should load into an AI's value system.

The obvious application (and conundrum) is selection of harm by self-driving cars. Each individual human is going to come to their own individual conclusion about whether to run into a tree or run over a dog. Whose values do you load into a car-driving AI? Is the DOT competent to set a reasonable standard value that manufacturers must load? Do we do decades-long studies of families with children, and examine the moral frameworks those children express in adulthood? Should cars in Japan get a different value load than cars in the UK? Etc.
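
To make that concrete, here's a rough sketch of what a "value load" might amount to in practice. Everything in it is hypothetical: a made-up table of harm weights and a scorer that consults it when every available action does some harm. The whole argument above is really about who gets to write the numbers in that table.

# Hypothetical illustration only: a "value load" as a table of harm weights that a
# driving policy consults when every option causes some harm. The categories and
# weights are invented for the example.
from dataclasses import dataclass

VALUE_LOAD = {          # who decides these numbers is the whole debate
    "pedestrian": 100.0,
    "occupant": 90.0,
    "animal": 10.0,
    "property": 1.0,
}

@dataclass
class Outcome:
    description: str
    harms: dict         # category -> estimated severity, 0.0 to 1.0

def harm_score(outcome: Outcome) -> float:
    """Weighted harm under the loaded values; lower is preferred."""
    return sum(VALUE_LOAD[cat] * sev for cat, sev in outcome.harms.items())

# The tree-or-dog choice from the post, with made-up severity estimates.
options = [
    Outcome("swerve into the tree", {"occupant": 0.4, "property": 1.0}),
    Outcome("brake but hit the dog", {"animal": 0.9}),
]
print(min(options, key=harm_score).description)

A different regulator, or a different country, would simply ship a different table, which is exactly the conundrum.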
 
The obvious application (and conundrum) is selection of harm by self-driving cars. Each individual human is going to come to their own individual conclusion about whether to run into a tree or run over a dog. Whose values do you load into a car-driving AI? Is the DOT competent to set a reasonable standard value that manufacturers must load? Do we do decades-long studies of families with children, and examine the moral frameworks those children express in adulthood? Should cars in Japan get a different value load than cars in the UK? Etc.

This is an academic problem that doesn't apply to real life, though. In real life, a driver who sees an imminent collision with no completely clear way to dodge it instinctively slams on the brakes to try to stop or at least slow down, and if there isn't enough braking room to avoid the collision then they collide. People are hit by drivers all the time, and outside of a terrorist attack or something it's never because the driver actively made a choice to hit those people instead of hitting something else. By and large, if an unavoidable collision situation develops suddenly, the driver isn't going to have time to make such a choice, and neither will an AI. The "superpower" of an AI car is that it will be able to avoid avoidable accidents by just paying attention all the time and slowing down or stopping in plenty of time.

So no, I don't think an AI car has to be loaded with "values" about what things it's less-okay to run over. That's useless information. An AI should be programmed to not run over anything intentionally and that's it.
 
