
Worried about Artificial Intelligence?

I am not worried about artificial intelligence.

Hell, at times I'm not even sure there is an organic one.
 
I am not in the least worried about artificial intelligence, but especially after this weekend's shenanigans I am more concerned than ever about the people who are supposed to be in charge of creating it.
 

Nobody is supposed to be in charge. Anyone can do it. Anyone will do it. It's like nukes, except you don't need uranium. Tons of hardware are still useful, but who knows: if you are smart enough, maybe a solid gaming PC is all you need. Or a solid gaming PC 20 years from now.
 

Anyone with pretty pricey toys.
 

At least current AI models are capital-intensive: you need a lot of time with a lot of GPUs.
 

Thought that was only for "instant" response times and multiple people accessing it?

Certainly, in the generative AI space you can run most models locally if you are happy with a much slower generation time. I can run Stable Diffusion locally on my PC and even on my iPad.

Strikes me that some of the fear seems very similar to that generated by the advent of "genetic engineering" with DIY CRISPR kits becoming available.
 
I think a more serious threat than AI is the apparent propensity of much of the population for treating AI (or even presumed AI) as some sort of oracle. Why on earth are so many people ready to uncritically outsource thought itself? I've never had a high opinion of the wisdom of the masses, but this is beyond even my most misanthropic pessimism. Software is a tool, nothing more, and no tool is suitable for every purpose. It's silly enough to make gods out of imagination; it's beyond ridiculous to make gods out of things we've actually made ourselves!
 

Well, Stable Diffusion models are 2 to 6 GB. That will fit in a modern GPU (12-16 GB of VRAM).
GPT-4 reportedly has around 1 trillion parameters. Even stored as 4-bit numbers (a quantization used for LLMs in a pinch), that is still about 500 gigabytes. You can feed it into the GPU layer by layer, and that is commonly done, but it's slow, and you need one full evaluation of the whole network to get one word out.
But then there are smaller LLMs you can run at home. You don't need all the languages and all the knowledge in the world; there are decent LLMs, with limited capabilities, that will fit into a common GPU.
But you certainly can experiment with AI, and you can develop AI, all alone.
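The arithmetic above is easy to check yourself. A rough sketch (parameter counts for GPT-4 are unconfirmed rumor, as noted; the 7B-parameter local model is just an illustrative example):

```python
def model_size_gb(n_params, bits_per_param):
    """Approximate weight-storage size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

# Rumored ~1 trillion parameters for GPT-4, quantized to 4 bits:
print(model_size_gb(1e12, 4))   # 500.0 GB -- far beyond consumer VRAM

# A typical 7B-parameter local model at 4 bits:
print(model_size_gb(7e9, 4))    # 3.5 GB -- fits easily in a 12-16 GB GPU
```

The same formula shows why full 16-bit weights (2 bytes per parameter) are four times larger again, which is why aggressive quantization is what makes home inference practical at all.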
 

What's really scary is how much of our society the people supposedly in charge are actually running.
 
Order restored:

Sam Altman restored as OpenAI CEO after his tumultuous ouster

SAN FRANCISCO, Nov 22 (Reuters) - ChatGPT-maker OpenAI has reached an agreement for Sam Altman to return as CEO days after his ouster, capping frenzied discussions about the future of the startup at the center of an artificial intelligence boom.

The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.

Both staunch capitalists I'm sure. We can all sleep easy again.
 

I'm talking about the training part. Once you've trained the model, yeah, running it is a different story.

ETA: There's a reason that the recent Biden administration executive order on AI called for a duty to report training any model with more than 10^26 flops, as well as to report what safety precautions you are taking.
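For a sense of scale, training compute is often estimated with the common 6*N*D heuristic (roughly six floating-point operations per parameter per training token). The model and token counts below are purely illustrative, not figures for any real run:

```python
EO_THRESHOLD = 1e26  # reporting threshold from the executive order, in FLOPs

def training_flops(n_params, n_tokens):
    """Rough training-compute estimate via the common 6*N*D heuristic."""
    return 6 * n_params * n_tokens

# Hypothetical run: 1-trillion-parameter model trained on 20 trillion tokens.
run = training_flops(1e12, 2e13)
print(run, run > EO_THRESHOLD)  # 1.2e+26 True
```

By this heuristic, crossing the threshold takes a frontier-scale combination of model size and data, which is consistent with the point above that only the training side is truly capital-intensive.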
 
Oh Noes! :scared:

Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse

Five days of chaos at OpenAI revealed weaknesses in the company’s self-governance. That worries people who believe AI poses an existential risk and proponents of AI regulation.

Open AI’s new boss is the same as the old boss. But the company—and the artificial intelligence industry—may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI’s CEO, cofounder, and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup’s staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI’s operations, bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness.

Well, "the good of humanity" may now be secondary in the list of concerns of the new board of directors. Whether the previous board had a correct understanding of that concept or not is a separate question. If you actually believe that the plot of Terminator, or something like that, is an actual existential risk for humanity, and not merely Science Fiction, perhaps there is an argument to be made for the action they took. Although in hindsight it seems to have been completely ineffective and perhaps even contrary to that goal.
 
Was there some sort of recent breakthrough towards achieving AGI?

Sam Altman's Ouster Followed Dangerous AI Breakthrough Claim: Reuters

According to the news agency, sources familiar with the situation said researchers sent a letter to the OpenAI board of directors warning of a new AI discovery that could threaten humanity, which then prompted the board to remove Altman from his leadership position.

These unnamed sources told Reuters that OpenAI CTO Mira Murati told employees that the breakthrough, described as “Q Star” or “(Q*),” was the reason for the move against Altman, which was made without participation from board chairman Greg Brockman, who resigned from OpenAI in protest.

This mysterious 'Q*' sounds like excellent fodder for conspiracy theorists, whether it actually exists or not.
 