
Merged Artificial Intelligence

https://situational-awareness.ai

You can see the future first in San Francisco.

Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.

The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.

Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.

Let me tell you what we see.
 
I see truth.

Also, AI is not and never will be a bubble. Not unless commodity general-purpose computing is a bubble, which it clearly isn't.

I don't see how that follows.

Let's get more specific. Much of the current work on AI involves large language models. Building these things has consumed a lot of people's time, a lot of computation power, and a lot of actual power (as in electricity), and thus a lot of money.

And the results are impressive. But they've also been shallow. We get **** like Google recommending that people eat rocks or put glue on pizza. We get AI generated clickbait from failing media companies desperately trying to cut costs. We get lawyers submitting AI generated briefs with citations that don't exist. In other words, despite how big a leap these LLMs are compared to earlier attempts, they aren't actually that useful. There is little that LLMs can do that I would ever want to pay money for. Maybe some people would, but some people isn't necessarily enough people, or enough money. Maybe there's no way for LLMs to turn a profit on all the money that's been invested in them. And if that's the case, isn't that pretty much the definition of a bubble?

Now, that doesn't mean that AI is going away. We have had real estate bubbles, but real estate hasn't gone away. And AI consists of more than just LLMs. There are definitely applications where the incentive to pay for the outputs is a lot clearer. But useful and even profitable AIs don't preclude an AI bubble if enough money is going into AIs which aren't useful or profitable.

tl;dr: I think we are very much in the middle of an AI bubble, but AI will still be around after the bubble bursts.
 

My interpretation of the prestige's comment was that he was referring to AI in general, as opposed to AGI. These two are often conflated.

AGI, on the other hand, has already been through several bubbles, each of which so far has been followed by a corresponding AI winter.
 

I got to listen to a talk given by the DoD's chief of AI. He articulated a principle of using AI as a tool, while being aware of its limitations. The DoD is very interested in using AI in warfare. But they have certain requirements: Specific use cases with measurable benchmarks, and an AI that is demonstrated to meet those benchmarks.

The reason we're getting so many hallucinations right now is that so far nobody is bothering to set, or try to reach, benchmarks for reliability. Not in the consumer space, anyway. I think we're on the verge of seeing the AI-development equivalent of Lockheed Martin: companies building AIs to meet DoD contract specs, and after that, the sky's the limit.
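
As a rough sketch of what "specific use cases with measurable benchmarks" could look like in practice (the benchmark names, thresholds, and scores below are made up for illustration, not an actual DoD spec): a model is accepted for a given use case only if it clears every pre-agreed threshold on a held-out evaluation set.

```python
# Toy acceptance gate for a contract-style benchmark spec.
# All names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class BenchmarkSpec:
    name: str
    threshold: float  # minimum acceptable score, agreed before evaluation

def meets_spec(scores: dict[str, float], specs: list[BenchmarkSpec]) -> bool:
    # Reject the model if any benchmark falls short of its threshold.
    return all(scores.get(spec.name, 0.0) >= spec.threshold for spec in specs)

specs = [
    BenchmarkSpec("task_accuracy", 0.95),
    BenchmarkSpec("citation_accuracy", 0.99),  # e.g. no fabricated references
]
measured = {"task_accuracy": 0.97, "citation_accuracy": 0.90}
print(meets_spec(measured, specs))  # False: the citation benchmark is not met
```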
 

I did a bit of digging into this guy. He seems to be at the opposite pole to someone like Sam Altman. While SA seems to be all about full steam ahead to make profit for his company, this guy seems to be part of a group that persuaded the Effective Altruists that the best way to save as many lives as possible is not to spend charitable donations on bed nets and water sanitation as before, but to adopt a long-term view based on ideas from Derek Parfit and Nick Bostrom. If superintelligence is not super-aligned, more people will die or never be born.

This idea was persuasive to him and Will MacAskill, so they joined an organization funded by a whizz-kid who was making sick bank with the latest crypto scheme: this was, of course, the FTX Future Fund, a spin-off of Sam Bankman-Fried's company.

Now, that doesn't mean he's wrong, but it somewhat undercuts his confident prognostications about the future.
 

AI as a thought experiment was, in the beginning, the exclusive province of people like this. Eventually they realized they were just going to need buckets of money to pursue their ideals, and that meant having to compromise their principles and partner with VC techbros. But there will always be that layer of friction; the AI prophets' hope for caution and deliberation when developing AI inhibits growth and interferes with profit, while VC techbros like Altman et al expect their returns to be maximized at every possible opportunity. One side has to win, and as we've already seen (and anyone could have predicted), that will always be the side with the capital.
 
I'm not sure how true that is. LLMs have received a lot of publicity lately, but I'm pretty sure there's been a lot of less public progress on other forms of AI computing.

I have to remain dubious on that. With the current lucrative hype cycle surrounding anything "AI"-related, any AI research org that reports progress can expect money to be thrown at it. So it makes little sense that they would keep anything like that under wraps.

It's a space where being publicly first with something new matters, so every moment spent hiding something in hopes of a big reveal risks an enormous cost if another org that happens to be researching the same thing makes the first announcement.
 
It's not so much about keeping it "under wraps", as though it's deliberately being secretive. It's just not getting as much attention because it isn't as flashy.

Look behind the scenes at just about any area of automation and you'll see some AI going on. AI tools are being used practically everywhere.
 

I think in most cases, what we call AI is based on some form of "deep" learning, using large amounts of training data to spit out something that is more or less a good synthesis of it. It's essentially GIGO (garbage in, garbage out).

Therefore we have large language models (LLMs) based on large amounts of text, or video and image generators based on large amounts of pictures, or music generators based on large amounts of music that has been fed in. Essentially, it is pattern recognition.

However, we know that in some cases the pattern recognition goes wrong, because the training data contains elements that we as humans don't consider relevant, but which end up being what the "AI" focuses on. For example, a classifier meant to tell huskies apart from wolves instead noticed that huskies typically appear alongside snow, and keyed on the snow rather than the animal.
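
As a toy illustration of that failure mode (the husky/wolf setup, feature names, and numbers below are invented for the example, not taken from any real system): if a background cue correlates almost perfectly with the label in the training data, a simple classifier will learn to rely on the cue instead of the thing you actually care about.

```python
# Toy sketch of a spurious correlation: the "snow" feature tracks the label
# far more cleanly than the "animal" feature, so the model learns to use snow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
label = rng.integers(0, 2, n)            # 1 = husky, 0 = wolf
animal = label + rng.normal(0, 2.0, n)   # weak, noisy cue from the animal itself
snow = label + rng.normal(0, 0.1, n)     # strong, spurious background cue
X = np.column_stack([animal, snow])

model = LogisticRegression(max_iter=1000).fit(X, label)
print("weights [animal, snow]:", model.coef_[0])   # the snow weight dominates

# At "deployment", a wolf photographed in snow gets labelled a husky.
wolf_in_snow = np.array([[0.0, 1.0]])
print("prediction for a wolf in snow:", model.predict(wolf_in_snow))  # 1 (husky)
```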
 

Historically, there is no stronger indicator that something is about to be a bubble than people categorically claiming that it "is not and never will be a bubble".
 
in this case I agree with theprestige. AI isn't going to go away. The genie is out of the bag, the milk has bolted. It's not a bubble.
 

LLM chatbots which cost billions to run probably are a bubble. AI that reaches and overtakes human capabilities is, IMHO, inevitable.

A bubble refers not to the product itself but to the financial shenanigans that inflate its value. The bursting of the dot-com bubble didn't eliminate dot-com businesses, and you can still buy tulips.
 

Funny equivalency from "it's a bubble" to "here to stay".

I don't think what passes for AI these days is here to stay or will become ubiquitous - it's just not good enough.
Even its hype people admit that it's all about raising money for something that is actually capable of becoming the next mobile phone/PC/internet.

This is just another incarnation of the "I should have invested in Amazon/Apple/Tesla when it was worth nothing" scaremongering, only AI is already overpriced.

I'm not saying that actual AI might not have a rosy future.
But what we see now is very clearly a bubble.
 
AI has the primary marker of a bubble, in my opinion: practically all of the hype, especially the hype informing investment, is centered entirely around expected future capability that is assumed to be "inevitable". The pitch is that you should spend a whole lot of money on products that are mediocre, broken, or otherwise crappy, or even just plain non-existent right now, because future versions will definitely be amazing.

Facebook CEO Mark Zuckerberg bought so deeply into crypto-"web3"-metaverse hype that he made his company invest billions of dollars researching and positioning itself to become the Google or Microsoft of the metaverse; not just offering a client or hosting its own virtual world, but developing backend systems and writing and promoting technical protocols and standards that other entrants in the metaverse space would choose and then eventually be compelled to use in order to establish seamless compatibility with the wider, collective metaverse. He even changed the company's name from Facebook to Meta, partly for the publicity but also as a power move to imply ownership of, or original authority over, the metaverse that everyone was eventually going to start using, because it was web3 after all, the inevitable next phase of the internet that everyone already uses.

Except that didn't happen. The technology was there, it worked fine, but it turns out that outside of a small pool of sci-fi nerds people generally aren't interested in walking around with computer goggles glued to their faces all day, and no amount of snazzy features or cool aesthetics can make them endure it for more than a couple of minutes. And this is something that's been repeatedly tested and demonstrated - starting with Google Glass in 2013, then Microsoft HoloLens in 2016, and then, strangely, Apple's Vision Pro this year. Hardcore fans buy a bunch at launch, and then sales rapidly tank when nobody else cares.

With the "metaverse" it's that, but even more, because nobody was interested in using the traditional web clients either. From the Second Life days, no matter how excited techbros get themselves over the idea, virtual world/metaverse applications are always niche products for a niche customer base that can't support or justify billions of dollars of investment.

It remains to be seen whether that is the ultimate fate of AI; but the early numbers don't make me optimistic.
 
Indeed.

AI is very much in the VR spot, in that people like to play with it but don't want to pay very much for it, nor would they trust their lives to it if they had the choice: most people don't want autonomous terminators.

As things stand, there is also no reason to assume that AI will follow Moore's law or that its utility will increase exponentially with connectivity.
 
