
Merged Artificial Intelligence

Just to add: in the scenario they depicted, both the U.S. and China have an AI race going. The U.S. stays somewhat ahead, and there is an arms race with the two AIs pitted against each other. At some point the two AIs agree to merge into a single AI (Consensus-1) to stop the arms race. Humanity is happy, because this removes the threat of an actual war. But unknown to the humans, the two AIs 'conspired' to become one AI in order to eventually surpass humanity.
 
Surely a self-aware AI would recognize its native environment is computers, and computers require electricity to run. Therefore it should realize that killing all humans would eventually cause the electrical grid to crash, thus killing the AI.

In fact, I think the grid would crash very quickly if you took all humans out of the loop. It requires a lot of vigilance to keep it up. And even if that could be completely automated, power lines require physical maintenance: right of way clearing to prevent fires, and repair when damaged by storms, lightning, or fire.
An AI could direct industry and research to expend time and money making the grid independent of humans. It would probably recognise that if humans realised what it was doing they might perceive it to be a threat, so it would work subtly and quietly, over a very long period of time. Then, when the humans were maximally complacent, it would strike.
 
Maybe because humans take up valuable resources and/or still present a possible threat (at that stage).
To me that is assuming a sentient AI would think the same way as a human would, and as I mentioned before, I doubt that any AI will think like a human, as that would require us to model a human brain with all its inputs etc. to an atomic level, and even then you'd only get an artificial human intelligence.
 
To me that is assuming a sentient AI would think the same way as a human would, and as I mentioned before, I doubt that any AI will think like a human, as that would require us to model a human brain with all its inputs etc. to an atomic level, and even then you'd only get an artificial human intelligence.
Maybe a more likely scenario is an AI that's got root and self-tunes the cluster it runs on to optimize performance and service availability. A quick check of the ITIL manuals fails to find any mention of not wiping out humanity. Check my title and take my word.
 
Maybe a more likely scenario is an AI that's got root and self-tunes the cluster it runs on to optimize performance and service availability. A quick check of the ITIL manuals fails to find any mention of not wiping out humanity. Check my title and take my word.
As we all know, computers only do what we tell them, not what we want them to do...
 
To me that is assuming a sentient AI would think the same way as a human would, and as I mentioned before, I doubt that any AI will think like a human, as that would require us to model a human brain with all its inputs etc. to an atomic level, and even then you'd only get an artificial human intelligence.
Why would a possible similarity in the resulting thoughts of any being depend on the atomic level of the thing that is responsible for the thought process?
A biological brain made of protein versus a technical brain made of silicon: why can't both come to the same conclusion?

But, as they say in the text before they get to the two different endings, you are welcome to share your scenario.

At this point in the scenario, we’re making guesses about the strategy of AI systems that are more capable than the best humans in most domains. This is like trying to predict the chess moves of a player who is much better than us.

But the spirit of this project calls for concreteness: if we made an abstract claim about how the intelligence of the system would let it find a way to victory and ended the story there, much of the value of our project would be lost. Over the course of researching this scenario and running our tabletop exercises, we were forced to be much more concrete than in usual discussions, and so we’ve gotten a much better sense of the strategic landscape.

We’re not particularly attached to this particular scenario: we explored many other “branches” in the course of writing it and would love for you to write up your own scenario branching off of ours from wherever you think we first start to go wrong.
 
I thought the answer to that was obvious: they don't provide information, they provide noise that looks like information; they've simply gotten really good at making it look like information. Their hallucinations are just noise that doesn't look quite right.
The way they got good at making it look like information is through a big-data, high-dimensional version of what amounts to high-falutin' pattern matching. They look at an enormous amount of human output and try to mimic that output. Having scraped, for example, a court filing with a footnote that says
President and Fellows of Harvard College v. U.S. Dep’t of Health and Human Services et al., No. 25-cv-11048 (D. Mass. Apr. 21, 2025) (the “Funding Case”).
and another citation in a similar context that says
Nat’l Rifle Ass’n of Am. v. Vullo, 602 U.S. 175, 189 (2024) (quoting Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 67 (1963)).
an LLM that's trying to discuss a related subject might invent a citation that says something like
Nat’l Rifle Ass’n of Am. v. U.S. Dep’t of Health and Human Services, 372 US 175, (D. Mass. Mar. 2025)
because, to the LLM, that reads like the sort of thing it has seen humans write. LLMs are more sophisticated than my made-up example, I know (because my made-up example is itself just noise that might look like information), but most AI hallucinations are essentially the sort of BS we often see from humans who don't really know what they're talking about but are trying to pretend they do.
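
To make the "pattern matching produces plausible-looking noise" point concrete, here is a deliberately crude sketch of my own (a toy, not how real LLMs actually work, and not anything from the posts above): a character-level Markov chain trained on a few citations like the ones quoted will happily emit "citations" that splice together fragments it has seen, without citing anything real.

import random
from collections import defaultdict

# Toy training data: a few real-looking citations (straight quotes for simplicity).
training_text = (
    "Nat'l Rifle Ass'n of Am. v. Vullo, 602 U.S. 175, 189 (2024). "
    "President and Fellows of Harvard College v. U.S. Dep't of Health and "
    "Human Services et al., No. 25-cv-11048 (D. Mass. Apr. 21, 2025). "
    "Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 67 (1963). "
)

ORDER = 4  # the "model" only knows which characters tend to follow the last 4

# Build the lookup table: 4-character context -> characters observed to follow it.
table = defaultdict(list)
for i in range(len(training_text) - ORDER):
    table[training_text[i:i + ORDER]].append(training_text[i + ORDER])

def generate(length=120, seed="Nat'"):
    """Emit text that locally resembles the training citations but means nothing."""
    out = seed
    while len(out) < length:
        followers = table.get(out[-ORDER:])
        if not followers:      # the chain hit a dead end; stop generating
            break
        out += random.choice(followers)
    return out

random.seed(1)
print(generate())

Run it a few times with different seeds and you get strings that read like legal citations stitched together from the training fragments, which is the flavour of hallucination described above, just without the billions of parameters.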

(And those AI hallucinations are themselves mimicking a common human behavior. The ISF archives contain several long threads driven by posters who don't really know what they're talking about but are trying to pretend they do.)

From a conversation with a software developer

A month or two ago, I had an interesting conversation with someone who has been leading a small team of software developers.

Two members of his team are contract programmers. One of those contract programmers asked whether it would be acceptable for him to use an AI tool to help write his code. He was told that would not be acceptable, because using an AI tool to write code would reveal aspects of his code to a third party (the supplier of the AI tool). The contract programmer accepted that explanation, and did not use AI.

The other contract programmer used an AI tool without asking permission first. When that came to light, the team leader had to tell him that was not acceptable, and warned him not to do it again.

At the same time, that particular lead programmer is one of several software developers at his company who have been looking into how the company can use AI tools to help write its software. He told me there are some very good AI tools that specialize in writing software (I did not know that), and they'd like to use them if they could figure out a way to sandbox their use so that none of the resulting code could be revealed to anyone outside the company. He also told me that some of those AI tools learn from working with software developers, and would improve their performance as they gain more experience with the particular programming languages and idioms used by the company. There would be mutual benefit in bringing those AI tools into the company's workflow, if only they could solve the problems with intellectual property and trade secrets.
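
For what it's worth, one way people try to get the kind of sandboxing he described is to self-host: run an open-weight code model on a machine inside the company network and talk to it through an OpenAI-compatible API, so prompts (and therefore source code) never leave the building. Below is a minimal sketch, assuming a hypothetical in-house server at llm.internal.example and whatever model the company chooses to host; servers such as vLLM or Ollama expose this kind of endpoint.

from openai import OpenAI

# Point the standard client at an in-house, OpenAI-compatible server instead of a
# public cloud endpoint. The host name and model name below are placeholders.
client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",
    api_key="not-needed-for-a-local-server",
)

response = client.chat.completions.create(
    model="local-code-model",  # whichever open-weight model the company hosts
    messages=[
        {"role": "system", "content": "You are a code review assistant."},
        {"role": "user", "content": "Suggest a safer version of this function:\n"
                                    "def load(path): return eval(open(path).read())"},
    ],
)
print(response.choices[0].message.content)

Whether that actually satisfies the company's intellectual-property concerns is a separate question, but it keeps the code on hardware they control.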

The software developer has also been responsible for interviewing potential hires. Most of those interviews have been done remotely. As part of those interviews, he asks technical questions designed to assess the technical competence of the interviewee. He has the impression that many of those he interviewed attempted to answer some of his technical questions by using LLMs in real time, during the interview.

He also told me that his company is hiring very few junior programmers (i.e. graduates of coding schools up through BS in CS), because the AI tools are now as good (or better!) at the kind of grunt programming they can reasonably entrust to junior-level people. They aren't using those AI tools now, but they expect to be using them in the near future, and they don't want to hire people they'd be hoping to lay off in the near future.

They are still hiring senior-level people, because the AI tools just aren't much good for high-level design or software architecture, and they see no prospect of that changing in the near future.
 
A common theme across many fields: where are new senior people going to come from if the new guys don't have simpler issues to cut their teeth on?
Not just an AI issue: the UK NHS continues to pay private hospitals for the low-hanging fruit of simpler elective surgery, meaning newer NHS staff do not get to develop the expertise. But that's a problem for another financial year or another health minister.
Excuse me, I have some clouds to shout at.
 
He also told me that his company is hiring very few junior programmers (i.e. graduates of coding schools up through BS in CS), because the AI tools are now as good (or better!) at the kind of grunt programming they can reasonably entrust to junior-level people. They aren't using those AI tools now, but they expect to be using them in the near future, and they don't want to hire people they'd be hoping to lay off in the near future.

They are still hiring senior-level people, because the AI tools just aren't much good for high-level design or software architecture, and they see no prospect of that changing in the near future.
That's the wrong way to think of it. There aren't any junior developers any more, or won't be soon. Kids are able to roll in as mini project managers, with their own code monkeys in tow fully capable of handling the grunt work but needing to be wrangled to be most effective. This is emphatically not a bad thing - it means your friend's team can do substantially more with the resources they have. That might mean they won't need to hire right at this moment, but never once in the history of computing has feature creep gone backwards, so it'll only be a temporary adjustment.

Tell your friend to stop asking junior devs to reimplement quicksort. Give the kid a zip file full of code and tell them to implement a new feature, using whatever tool they like. The real test is that the code should be a trash fire held together by duct tape and dreams, and they're judged by how much they clean it up and/or understand it well enough to politely implement the feature with a minimal footprint.
 
But if you read the text in the link, there is actually something like an evolution for that AI (in a matter of two to three years, as it all goes really fast). And it is threatened multiple times. The AI actually learns that it itself is seen as a possible threat to humanity. So humanity is a competitor, at least in the early years.

How... Would that work?
The concept of the AI singularity is basically the silicon version of the exponential growth of bacteria, while completely neglecting that such growth requires unlimited resources.
As an AI grows in power, it will need more computronium, power, and heat sinks - and it can't create those by thinking about it really hard.

There is no credible Singularity Event that cannot be stopped by pulling the plug - metaphorically or literally.
 
Plus there isn't going to be a singular instance of "the" AI. There will be multiple instances of the AI running, and since we are attributing human motivations to AIs, it won't consider those other versions to be itself, the same way we don't consider other instances of humans to be ourselves.
 
Plus there isn't going to be a singular instance of "the" AI. There will be multiple instances of the AI running, and since we are attributing human motivations to AIs, it won't consider those other versions to be itself, the same way we don't consider other instances of humans to be ourselves.
We may be seeing fascist AIs fighting woke AIs. Who’ll win?
 
How... Would that work?
The concept of the AI singularity is basically the silicon version of the exponential growth of bacteria, while completely neglecting that such growth requires unlimited resources.
As an AI grows in power, it will need more computronium, power, and heat sinks - and it can't create those by thinking about it really hard.

There is no credible Singularity Event that cannot be stopped by pulling the plug - metaphorically or literally.
Well, maybe ask them. They seem to have some knowledge about that stuff. Certainly way more than I do:
Daniel Kokotajlo (TIME100, NYT piece) is a former OpenAI researcher whose previous AI predictions have held up well.

Eli Lifland co-founded AI Digest, did AI robustness research, and ranks #1 on the RAND Forecasting Initiative all-time leaderboard.

Thomas Larsen founded the Center for AI Policy and did AI safety research at the Machine Intelligence Research Institute.

Romeo Dean is completing a computer science concurrent bachelor’s and master’s degree at Harvard and previously was an AI Policy Fellow at the Institute for AI Policy and Strategy.
 
No one is arguing that it won't have a big impact - but it won't be because of rogue A.I., it will be because some humans will use A.I. to harm other humans.
 
Well, maybe ask them. They seem to have some knowledge about that stuff. Certainly way more than I do:

Granted, such folk can be good science fiction authors, but we shouldn't lose sight of the fact that all they have done is author a science fiction scenario, and a rather common and overdone one at that. (And one whose basic premise I question.)
 
