• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Worried about Artificial Intelligence?

Some interesting points on the potential controllability of AI relative to humans, and why this should make us more optimistic about avoiding doom scenarios:

https://optimists.ai/2023/11/28/ai-is-easy-to-control/
These days, many people are worried that we will lose control of artificial intelligence, leading to human extinction or a similarly catastrophic “AI takeover.” We hope the arguments in this essay make such an outcome seem implausible. But even if future AI turns out to be less “controllable” in a strict sense of the word - simply because, for example, it thinks faster than humans can directly supervise - we also argue it will be easy to instill our values into an AI, a process called “alignment.” Aligned AIs, by design, would prioritize human safety and welfare, contributing to a positive future for humanity, even in scenarios where they, say, acquire the level of autonomy current-day humans possess.
In what follows, we will argue that AI, even superhuman AI, will remain much more controllable than humans for the foreseeable future. Since each generation of controllable AIs can help control the next generation, it looks like this process can continue indefinitely, even to very high levels of capability. Accordingly, we think a catastrophic AI takeover is roughly 1% likely - a tail risk worth considering, but not the dominant source of risk in the world. We will not attempt to directly address pessimistic arguments in this essay, although we will do so in a forthcoming document. Instead, our goal is to present the basic reasons for being optimistic about humanity’s ability to control and align artificial intelligence into the far future.

And here is what I think is a pretty thoughtful response:
https://www.lesswrong.com/posts/Yyo...-on-ai-is-easy-to-control-by-pope-and-belrose
 
Yudkowsky is an arrogant, self-serving crank who frequently, not to say incessantly, spouts drivel.

I confess to not knowing who Eliezer Yudkowsky is, but the idea that A.I. is so dangerous that it is even worth risking nuclear war (which, as we have known since I was a kid, is a possible extinction-level threat to humanity, and at the very least risks millions or even billions of deaths) seems very dubious to me. We know that one is very bad. We don't really know what A.I. will do. It might even be a great boon for humanity. I often imagine that it would be.

I agree that we should be cautious, and not rashly rush into something we don't fully understand, but not to the point of irrational paranoia about it.
 
I'm not feeling the doom at the moment, and I haven't read Roboramma's links so maybe this was discussed. But we shouldn't consider only what large publicly owned corporations in the U.S. -- with all their built-in financial and social guardrails -- might do. We also have to consider what bad actors and rogue nations might do. I gather this tech isn't as resource intensive as, say, nuclear weapons, yet even impoverished North Korea has nukes. So for all the talk of "We can limit AI's capabilities," we have to ask "What about players who won't?"
 

I just want to point out that the first of those links was against the doom scenario.

Regarding the latter part of your post: the issue of "If we don't do it, other, less safety-minded folk will do it first" is, at least according to them, the reason that both OpenAI and Anthropic were founded.
 
'We all got AI-ed': The Australian jobs being lost to AI under the radar

Australians are already losing work to AI, but the impact so far has been largely hidden from view.

Economists say it's also creating jobs at an unprecedented rate, but not always for the people in the firing line.

Benjamin* says he was one of those people earlier this year, although it's unlikely to ever show up in official figures.

"All our jobs were replaced by chatbots, data scraping and email," he says.

"We all got AI-ed."

His job in wine subscription sales was one of 121 positions made redundant in July by the ASX-listed Endeavour Group, which owns a number of prominent retail brands such as Dan Murphy's, BWS and Jimmy Brings.

Benjamin says staff were given the strong impression at the time that AI was a key factor...
 

Thanks Roboramma.

I'd say my concern isn't so much, "If we don't do it, bad actors will." It's more that even if we ensure sufficient guardrails for the big players, we still have to consider out-of-control AI coming from another source. We need to contemplate extreme scenarios regardless of the ability of major companies to avoid them.
 
Best countermeasure against bad AI with nukes is good AI with nukes :boxedin:

We're definitely heading toward that Star Trek episode where the two computers duke it out, and people willingly walk into death chambers because the data says they're dead.

Don't worry, though, Captain Kirk will save us. :alien009:
 
No we aren't.
 
Has anyone asked the question "Is artificial stupidity distinguishable from real stupidity?"

Isn't that the 'Turning over in the grave' test? If the artificial stupidity can do something that makes someone exclaim "[Corpse] would be turning over in their grave!", then the artificial stupidity has passed for actual stupidity.
 

The real question here.

On the other side of the coin from the paranoid phobia that an emergent AI will suddenly decide "humanity is a threat" and unilaterally hijack the world's electronics and weaponry to kill everyone off is the delusional fantasy, entertained by AI proponents, that AI will "solve all of our problems" - as in, social and geopolitical problems like poverty and unemployment. AI proponents have a somewhat cultish aspirational vision that a true AI won't merely be sentient, but sentient minus all of the flaws that sentient humans have. Without any reason to think as much (and every reason to believe the opposite), they assert as a just-so proposition that an AI will be unbiased and immune to lies and propaganda; that complex societal issues are just math problems that humans simply aren't advanced enough to tackle yet, but that an AI ubermind will be able to teach itself the requisite skills, solve these problems handily, and produce solutions so inherently trustworthy that humanity will not hesitate to cheerfully implement them.
 

"The Ultrabrain Supermind Cognos X-29 Intelligence is online! And it's performing over ten billion quadrillion operations per second!"

"You look like there's a 'but' coming."

"Well, so far it's using all its resources to brainstorm ideas for new reality shows. It's come up with enough ideas that we could film from now until the heat death of the universe and not run out. But it's refusing to think of anything else."

"We spent eleventy billion dollars on this!"

"Some of these sound pretty good, like Fart Vacation Mystery Date and Million Dollar Scarecrow Wedding."

"...can we build an AI capable of fixing other AIs?"

"We did! It's now a contestant on Fart Vacation Mystery Date. I hope it picks Sheila, she's hilarious."
 
I bet that it picks Sheila! Candi is nowhere near as attractive no matter how many neural implants she says she's got. She doesn't seem to get that there's more to farting than just the sound.

Do you remember when we thought that AIs would break down if they were ever exposed to a serious case of cognitive dissonance?
 

Speaking of HAL, it seems to me that someone could actually make a HAL 9000 now. Not the homicidal one, but one that works the way it was supposed to.

Aren't all of the elements achievable now?

[Image: HAL 9000 prop from 2001: A Space Odyssey]


Hello, Dave.
 
