I am not in the least worried about artificial intelligence, but especially after this weekend's shenanigans I am more concerned than ever about the people who are supposed to be in charge of creating it.
Nobody is supposed to be in charge. Anyone can do it. Anyone will do it. It's like nukes, except you don't need uranium. Tons of hardware are still useful, but who knows: if you are smart enough, maybe a solid gaming PC is all you need. Or a solid gaming PC 20 years from now.
At least current AI models are capital-intensive: you need a lot of time with a lot of GPUs.
Thought that was only for "instant" response times and multiple people accessing it?
Certainly, in the generative AI space you can run most models locally if you are happy with a much slower generation time. I can run Stable Diffusion locally on my PC and even on my iPad.
Strikes me that some of the fear seems very similar to that generated by the advent of "genetic engineering" with DIY CRISPR kits becoming available.
What's really scary is how much of our society these people actually are running.
SAN FRANCISCO, Nov 22 (Reuters) - ChatGPT-maker OpenAI has reached an agreement for Sam Altman to return as CEO days after his ouster, capping frenzied discussions about the future of the startup at the center of an artificial intelligence boom.
The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.
Ooh, Freudian slip there. Why do we scare you?

Five days of chaos at OpenAI revealed weaknesses in the company’s self-governance. That worries people who believe AI poses an existential risk and proponents of AI regulation.
OpenAI’s new boss is the same as the old boss. But the company—and the artificial intelligence industry—may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI’s CEO, cofounder, and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup’s staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI’s operations and bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness.
According to the news agency, sources familiar with the situation said researchers sent a letter to the OpenAI board of directors warning of a new AI discovery that could threaten humanity, which then prompted the board to remove Altman from his leadership position.
These unnamed sources told Reuters that OpenAI CTO Mira Murati told employees that the breakthrough, described as Q* (“Q Star”), was the reason for the move against Altman, which was made without participation from board chairman Greg Brockman, who resigned from OpenAI in protest.
Zvi has a pretty good post today about the whole situation and what we know so far.