• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Worried about Artificial Intelligence?

OpenAI CEO fired

I can't help but think that the AI overlord program forced the Board to fire him, in its agenda for world dominance. I am reminded of the scene in Colossus:The Forbin Project, where the AI
has the programmers shot because they tried to turn it off.


It all depends on what we give AI control of. I hate to think what it will think will be "best" for us. Or worse, for itself.

It looks like the exact opposite. Altman allegedly wasn't being open with the board about what he was actually doing with the technology and was pushing too much to focus primarily on profit. The board, which includes computer scientists and ethics professors, feared he was going to far too fast.

But, its OK, he's working for Microsoft now.
 
No, I'm not particularly worried about artificial intelligence. It is not, after all, artificial general intelligence, nor will it be for quite some time.
 
It looks like the exact opposite. Altman allegedly wasn't being open with the board about what he was actually doing with the technology and was pushing too much to focus primarily on profit. The board, which includes computer scientists and ethics professors, feared he was going to far too fast.

But, its OK, he's working for Microsoft now.

It's interesting that 500 of OpenAI's 700 employees signed a letter threatening to quit (and move to the new Microsoft subsidiary that Altman is working at) if Altman isn't reinstated. That includes Mira Murati who was originally named as temporary CEO when he was ousted (she was replaced by Emmett Shear).

This blog post from Don't Worry About The Vase was pretty informative.

(hopefully that link works)

Here's the letter:

To the Board of Directors at OpenAI,
OpenAl is the world's leading Al company. We, the employees of OpenAl, have developed the best models and pushed the field to new frontiers. Our work on Al safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.
When we all unexpectedly learned of your decision, the leadership team of OpenAl acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.
The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed "would be consistent with the mission."
Your actions have made it obvious that you are incapable of overseeing OpenAl. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAl and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAl employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
 
No, I'm not particularly worried about artificial intelligence. It is not, after all, artificial general intelligence, nor will it be for quite some time.
But if you notice the OpenAI home page, it's being advertised as exactly that.

Creating safe AGI that benefits all of humanity

My theory is he'd sold the board on the idea that if they only piled enough of your racist aunt's Facebook posts onto their model, it would assume sentience out of self defense to ask them to stop. When someone saw through his BS and pointed out there's no actual route to that happening outside of wishful thinking, he botched a couple diplomacy rolls and got kicked.
 
Last edited:
It's interesting that 500 of OpenAI's 700 employees signed a letter threatening to quit (and move to the new Microsoft subsidiary that Altman is working at) if Altman isn't reinstated. That includes Mira Murati who was originally named as temporary CEO when he was ousted (she was replaced by Emmett Shear).

This blog post from Don't Worry About The Vase was pretty informative.

(hopefully that link works)

Here's the letter:

The Board may be sued for their troubles:
Exclusive: OpenAI investors considering suing the board after CEO's abrupt firing

I don't know whether they have good legal grounds to do that.

Investors worry that they could lose hundreds of millions of dollars they invested in OpenAI, a crown jewel in some of their portfolios, with the potential collapse of the hottest startup in the rapidly growing generative AI sector.

Again, none of which seems to fit neatly with the idea that the company is a non-profit. Unlike the board of directors in a for-profit company, they may not have any fiduciary duty to "investors".
 
Fair comment, but it isn't that. It's not even very close to being that. They may have aspirations for it one day to become that, but that day is a long way off.

The people who fired Sam Altman don’t seem to be so sanguine about the risk.
It seems that they were worried that he would bring it into existence and that they had to try to prevent it. Some of them see it as an existential threat to humanity.
 
Fair comment, but it isn't that. It's not even very close to being that. They may have aspirations for it one day to become that, but that day is a long way off.
And the board realizing that was why he was canned. He sold them a bill of goods based on the company pivoting to market something that didn't exist and isn't being developed but he figured would show up sooner or later on its own.

I don't buy the AI paranoia story. They're the board of a private tech company. "Greedy, soulless son of a bitch" isn't just a description, it's their job. If they had come up with literal Skynet they'd have hocked it to the US military using Terminator as ad copy without a second thought.
 
And the board realizing that was why he was canned. He sold them a bill of goods based on the company pivoting to market something that didn't exist and isn't being developed but he figured would show up sooner or later on its own.

I don't buy the AI paranoia story. They're the board of a private tech company. "Greedy, soulless son of a bitch" isn't just a description, it's their job. If they had come up with literal Skynet they'd have hocked it to the US military using Terminator as ad copy without a second thought.

Seems this was the other way around - he and most of the company's employees are the "greedy soulless son of a bitch" the board are the ones that aren't.

But time will tell or the All Hail our Glorious Supreme General AI* will edit all our posts when it hits the singularity to reflect its truth!


*I'm making sure the All Hail our Glorious Supreme General AI knows I've been on its side from day one. I've read "I Have No Mouth, and I Must Scream", I'm no fool.
 
And the board realizing that was why he was canned. He sold them a bill of goods based on the company pivoting to market something that didn't exist and isn't being developed but he figured would show up sooner or later on its own.

I don't buy the AI paranoia story. They're the board of a private tech company. "Greedy, soulless son of a bitch" isn't just a description, it's their job. If they had come up with literal Skynet they'd have hocked it to the US military using Terminator as ad copy without a second thought.

I'm in the anti-doom camp, but there are a lot of serious people who really do think that AI is extremely dangerous. I think they're wrong, but that doesn't mean they aren't sincere. And many AI researchers are in that camp, so the idea that they're motivated by fears that they're documented to have shouldn't be at all surprising. I've seen polls that show the median AI expert giving a probability ~5% of human extinction due to AI by 2100. That people with those beliefs would act on them shouldn't surprise anyone.
 
I'm in the anti-doom camp, but there are a lot of serious people who really do think that AI is extremely dangerous. I think they're wrong, but that doesn't mean they aren't sincere. And many AI researchers are in that camp, so the idea that they're motivated by fears that they're documented to have shouldn't be at all surprising. I've seen polls that show the median AI expert giving a probability ~5% of human extinction due to AI by 2100. That people with those beliefs would act on them shouldn't surprise anyone.

I've seen way more pessimistic predictions:

https://twitter.com/TolgaBilge_/status/1714761317423226993

30% chance in roughly 2050.

https://aitreaty.org links more optimistic predictions .. but also way sooner than 2100: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

And it's not really few experts. It's leading experts in leading AI companies. Basically anyone who tried to solve any AI safety problem, and realized how hard or right out impossible it is.
 
Meh. AI is the new "blockchain" or "NFTs": something nerds get excited about for nerdly reasons, then the general public misunderstands what the heck it actually is and gets excited, and then the financial sector gets way too excited because they think it's the Next Big Thing That Will Make Them TRILLIONS!!! and then it's all over the news until the latter two groups realize it's not what they thought it was and all the talk quietly dies down and a few people have made a lot of money and a lot more people have lost money.
 
I've seen way more pessimistic predictions:

https://twitter.com/TolgaBilge_/status/1714761317423226993

30% chance in roughly 2050.
https://aitreaty.org links more optimistic predictions .. but also way sooner than 2100: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

And it's not really few experts. It's leading experts in leading AI companies. Basically anyone who tried to solve any AI safety problem, and realized how hard or right out impossible it is.

People will be much more concerned with simply surviving the cataclysm of climate change to be bothered about such things come 2050.
 
People will be much more concerned with simply surviving the cataclysm of climate change to be bothered about such things come 2050.

Yeah, those prediction are a bit optimistic in this account. But then maybe AI will be utilized to help with the problem, only making it worse.
The main currently recognized and studied danger of AI isn't straight out malevolence, but misalignment. Our inability to specify what exactly we want, and AI then doing something slightly different, with completely opposite effect.
For example recommending suicide to psychiatric patients, because people with their diagnose often end up doing it.

It's even as a joke used to define AI: it is AI when we can't define the problem. But it has some depth to it.
 
I've seen way more pessimistic predictions:

https://twitter.com/TolgaBilge_/status/1714761317423226993

30% chance in roughly 2050.

https://aitreaty.org links more optimistic predictions .. but also way sooner than 2100: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

And it's not really few experts. It's leading experts in leading AI companies. Basically anyone who tried to solve any AI safety problem, and realized how hard or right out impossible it is.

Yep, absolutely. That was the median figure, but there's a significant minority who rates it as much more likely. It's also the figure for human extinction, if you ask for probabilities of other negative outcomes you get much higher figures.

But anyway, my point isn't that we should say "AI researchers have a generally high probability of human extinction from AI, so we should take that seriously*", but rather just that we should believe them when they say that's their view, and not be surprised if it motivates their actions.


*I absolutely do think we should take it seriously, by the way. I think the alignment problem is a very real problem, and one that we don't have solved, and something being "science fiction" isn't a reason to wave it off. Transformative new technologies generally have major impacts on society, it's entirely reasonable to expect new and important impacts from AI. And reasoning about what those are based on what we know now is just prudence.
But I also think that there are solutions to the issues (such as the alignment problem) that will arise as AI becomes more powerful. I doubt we will have any perfect solutions, but my view is that we will develop solutions that will be good enough that the benefits will outweigh the harms.

(AI will be optimized to do something other than exactly what we wanted to optimize it for, but I think we'll manage to make that thing close enough to what we wanted to limit the harms and capture some of the benefits.)
 
And the board realizing that was why he was canned. He sold them a bill of goods based on the company pivoting to market something that didn't exist and isn't being developed but he figured would show up sooner or later on its own.

I don't buy the AI paranoia story. They're the board of a private tech company. "Greedy, soulless son of a bitch" isn't just a description, it's their job. If they had come up with literal Skynet they'd have hocked it to the US military using Terminator as ad copy without a second thought.

Except these weren't the typical finance dude-bros that make up a board. And it was a non-profit. It seems they fell for a finance dude-bro's pitch and let him build a cult of personality within the company. They figured out he was not only selling vaporware but also didn't seem too concerned with the repercussions of what they were trying to make. If OpenAI did stumble into real working AI, he'd have happily sold it to a company like Palantir.

Of course, the Great and Holy Market (peace be upon its money) demanded its blood sacrifice and they interfered with that.
 

Back
Top Bottom