
Worried about Artificial Intelligence?

AI tech is obviously going to get better, and is doing so now. But it is not a given it will improve at an exponential rate.

The airspeed of passenger airliners is not increasing at an exponential rate.
Knowledge processing is not the same thing as aerodynamics and should not be expected to vary in a similar way.
 
I think there will come a time when people want AIs for companionship and social interaction. I think many will want assurances they aren't merely opening themselves up to some sort of complex Eliza. At that point, the marketplace will provide economic pressure to demonstrate genuine self-awareness rather than a very close facsimile of it.
 
Counterpoint: We privilege thinking beings above all others. We put animals to work without seeking their consent or valuing their freedom, but consider human slavery a moral horror.

So we probably will and should care very much about the nature of any tools we make to do our thinking, feeling, and caring for us. You'd be horrified if you set out to make a better workhorse and ended up breeding slaves instead. You'd be asking yourself some serious questions about where the line is and how you know when you're about to cross it.

This is important, of course. I have no idea why some people are so dismissive of the question of whether AI genuinely knows things or, beyond that, whether it is genuinely conscious. The two things do not need to be the same.

If someone throws away a sad-eyed teddy bear, does that teddy bear feel anxiety on its way to the incinerator?

Unless you are five, you will probably assume no.

If someone creates a doll with extremely life-like expressions that can express "pain" when you stab it with a knife, and maybe even scream, would we assume it feels? There is no good reason to think so.

But when we see large language models, people seem more willing to grant that the model is actually thinking, or that it is doing just what humans do, even though we have had experience with smaller and less convincing language models that used certain short-cuts to make themselves seem smart. For some reason, at a certain level of complexity, people seem willing to switch to credulousness about it being a thinking machine.

I argued long ago that the Turing Test was a nonsense test, and now that it could almost certainly be passed easily by an AI, it remains nonsense.
 
Knowledge processing is not the same thing as aerodynamics and should not be expected to vary in a similar way.

Tell that to Joe Morgue, who specifically said "all tech" and used powered flight and passenger transport as his examples for how AI will develop...

I feel like people don't get that AI is like all other tech: it's going to get better at an exponential rate.

People laughing at how crude AI is now are on the same intellectual footing as someone laughing at the canvas-and-wood-framed plane flying a few dozen feet over a sand dune at Kitty Hawk, NC and going "Yeah, sure, that's gonna replace luxury passenger liners for travel between New York and Europe, pull the other one."
 
AI tech is obviously going to get better, and is doing so now. But it is not a given it will improve at an exponential rate.

The airspeed of passenger airliners is not increasing at an exponential rate.

The FAA bans passenger airlines from operating over the continental US at speeds above the sound barrier. A technology that's legally constrained from advancing (at least in the world's largest market) is probably not a good choice of example.

I do agree with the general point, as opposed to your specific example, that technology doesn't necessarily advance exponentially. There is also at least some potential that an analogous regulatory regime could slow progress in AI (the recent executive order from the Biden administration requires reporting of models trained above a certain size, but at present it's only a reporting requirement; I haven't followed the recent European legislation).

More generally, though, technological progress tends to follow S curves, which go through an exponential phase that later stalls out (the next phase of progress being a new S curve). And the best model for this process is Wright's Law (which predicts a fixed percentage cost decline for each doubling of cumulative units produced). If we apply Wright's Law to the progress of AI, it suggests that we've got a significant length left of the upward exponential phase of the S here, but that depends to some extent on how profitable modern AI tools will turn out to be.
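For concreteness, here's a minimal sketch of the relationship Wright's Law describes. The function name, the 20% learning rate, and the dollar figures are illustrative assumptions on my part, not numbers from anywhere in this thread:

```python
# A toy illustration of Wright's Law under the usual assumption that cost
# falls by a fixed fraction (the "learning rate") with each doubling of
# cumulative production: cost(n) = cost(1) * n ** (-b), b = -log2(1 - rate).
import math

def wrights_law_cost(first_unit_cost: float, units_produced: int,
                     learning_rate: float = 0.20) -> float:
    """Estimated cost of the nth unit, given a per-doubling learning rate."""
    b = -math.log2(1.0 - learning_rate)
    return first_unit_cost * units_produced ** (-b)

# Ten doublings at a 20% decline per doubling: 100 * 0.8**10, about 10.74.
print(wrights_law_cost(100.0, 1024))  # ~10.74
```

Note that the decline is driven by cumulative production rather than elapsed time, which is why the law's forecast for AI hinges on how much of it gets built, i.e. on profitability.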
 
The FAA bans passenger airlines from operating over the continental US at speeds above the sound barrier. A technology that's legally constrained from advancing (at least in the world's largest market) is probably not a good choice of example.

I do agree with the general point, as opposed to your specific example, that technology doesn't necessarily advance exponentially. There is also at least some potential that an analogous regulatory regime could slow progress in AI (the recent executive order from the Biden administration requires reporting of models trained above a certain size, but at present it's only a reporting requirement; I haven't followed the recent European legislation).

More generally, though, technological progress tends to follow S curves, which go through an exponential phase that later stalls out (the next phase of progress being a new S curve). And the best model for this process is Wright's Law (which predicts a fixed percentage cost decline for each doubling of cumulative units produced). If we apply Wright's Law to the progress of AI, it suggests that we've got a significant length left of the upward exponential phase of the S here, but that depends to some extent on how profitable modern AI tools will turn out to be.

Yes, I'll take your point about the regulations.

Also, I was indeed thinking of how tech tends to plateau at a certain level of development until something else comes along to make it better.

I assume that LLMs could be a component of an AGI, but simply improving them in terms of how much data they are trained on won't change their fundamental nature, which is essentially that of an impressive chat-bot.

ETA: Thanks for referring me to Wright's Law.
 
Of course AI tech is going to get better at an exponential rate. The US Department of Defense has already signaled that it is willing to use AI tech as soon as it gets better. There's no way that signal doesn't trigger quantum leaps forward.
 
Yes, I'll take your point about the regulations.

Also, I was indeed thinking of how tech tends to plateau at a certain level of development until something else comes along to make it better.

I assume that LLMs could be a component of an AGI, but simply improving them in terms of how much data they are trained on won't change their fundamental nature, which is essentially that of an impressive chat-bot.

ETA: Thanks for referring me to Wright's Law.

Thanks, and mostly agreed. I'm slightly more optimistic about the potential of LLMs, but only to the point of making me agnostic on the question of whether or not a large enough LLM could achieve AGI. I certainly agree that they may turn out to be a limited approach that tops out at some level.

My optimism is based on the fact that we've already been surprised by what just making larger models has accomplished in generality, but that's by no means conclusive with respect to future problems. My personal credence on the question "will future LLMs achieve AGI?" is somewhere around 50/50, so I'm pretty uncertain here.
 
Of course AI tech is going to get better at an exponential rate. The US Department of Defense has already signaled that it is willing to use AI tech as soon as it gets better. There's no way that signal doesn't trigger quantum leaps forward.

No idea if this is meant seriously or sarcastically, but I believe they also threw money at positive psychology. It didn't mean that positive psychology got exponentially better. In fact, much of it turned out to be bollocks.
 
I feel like people don't get that AI is like all other tech: it's going to get better at an exponential rate.

"Close" means something different in this context. If AIs are getting the broad, conceptual strokes of something now and 99% screwing up the practical application of it... that's actually pretty close.

Anything AI can do in a "Funny, LOL, I see what you were trying to do but look at how much you messed it up" way NOW, it's going to be doing very, very, very well 18 months, 36 months, 72 months down the road. We're not talking about AI perfecting this at some distant point in the far future.

A few months back, the big "tell" was that AI couldn't draw human hands.

1) Rob Liefeld couldn't draw feet, and he was the most successful comic artist of an entire decade.
2) Half of all cartoonists joke about how they can't draw hands.
3) Seen AI art in the last few weeks? Hands aren't much of a problem anymore.

An example of this is machine translation. It was really, really bad for a long time, then a few years ago it started to get noticeably better, and in the last year or so it has come to seem better than many human translators. It's like one day you wake up and notice that it is so much better than it used to be.
 
An example of this is machine translation. It was really, really bad for a long time, then a few years ago it started to get noticeably better, and in the last year or so it has come to seem better than many human translators. It's like one day you wake up and notice that it is so much better than it used to be.

Yeah, DeepL was the big game-changer there.
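For anyone who wants to poke at it, DeepL publishes an official Python client; a minimal sketch of calling it looks roughly like this (the auth key is a placeholder you'd replace with your own, and the sample sentence is just my own example):

```python
# Minimal sketch using DeepL's official Python client (pip install deepl).
# "YOUR_AUTH_KEY" is a placeholder for a real DeepL API key.
import deepl

translator = deepl.Translator("YOUR_AUTH_KEY")
result = translator.translate_text("Der Apfel fällt nicht weit vom Stamm.",
                                   target_lang="EN-US")
print(result.text)  # The German proverb rendered in English.
```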
 
Listen to Sean Carroll's Mindscape Podcast on the issue.
He asked the program something like: "What's the likelihood that the product of two different numbers is a prime, and how does that change as the numbers get larger?"
The program said it was low, and getting lower with higher numbers.

If the program knew what a prime is, it wouldn't say such rubbish. It would understand that the chance is always zero, because that is the definition of a prime.
But the program reads a definition the same way it reads a telenovela.

You have to specify "whole numbers greater than 1" for this to be true. If you only say "numbers", then even a prime can be the product of "two different numbers" (e.g. 7 = 1 × 7).
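To make the corrected claim concrete, here's a throwaway sketch that brute-forces it for small cases (is_prime is my own helper, not anything from the podcast):

```python
# Brute-force check: the product of two different whole numbers greater
# than 1 is never prime, since both factors divide it.

def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

assert not any(is_prime(a * b)
               for a in range(2, 200)
               for b in range(a + 1, 200))  # "two different numbers"
print("No product a*b with 1 < a < b < 200 is prime.")
```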
 
From Bing chat: "I’m sorry, but I’m not sure what you’re asking. Could you please provide more context or rephrase your question?"
That response sounds suspiciously like what I would post to a nonsensical question or a question that doesn't make sense to me...
 
No idea if this is meant seriously or sarcastically, but I believe they also threw money at positive psychology. It didn't mean that positive psychology got exponentially better. In fact, much of it turned out to be bollocks.

It's meant seriously, and psychology is not analogous to technology.

I had the privilege of attending a talk given by the DOD's head of technology and AI. In it, he said the Pentagon was interested in AI for specific use cases, with well-defined success metrics for those use cases. This leads me to believe very strongly that we will soon see a quantum leap forward in AI performance, focused specifically on the Pentagon's stated interests.

And sometimes the exponential improvement in a domain is not the perfection of that domain's practical applications, but a more perfect understanding of that domain's limitations, and an informed decision about where and how to apply it.
 
It's meant seriously, and psychology is not analogous to technology.

I had the privilege of attending a talk given by the DOD's head of technology and AI. In it, he said the Pentagon was interested in AI for specific use cases, with well-defined success metrics for those use cases. This leads me to believe very strongly that we will soon see a quantum leap forward in AI performance, focused specifically on the Pentagon's stated interests.

And sometimes the exponential improvement in a domain is not the perfection of that domain's practical applications, but a more perfect understanding of that domain's limitations, and an informed decision about where and how to apply it.

I suspect DOD usage of AI will be more like using it to pore over trillions of pixels of satellite data and video footage to flag potentially significant items than to train it to answer customer service questions for people calling the Wendy's complaint line. Not so much replacing a person as carrying out a task a person cannot perform.
 
