
Merged Artificial Intelligence

The feedback is there though .. the output of the network is part of the prompt when you ask again. Oh wait, even more than that. LLMs generate one token at a time .. so when they are outputting the second token they are also reacting to their first token, etc.
So an LLM also generates its own prompt. They only react when prompted, sure .. but that's an artificial limitation. Obviously you can simply loop the output back in. LLMs also have to be trained to stop the response at some point (to emit a special stop token). They totally could blab forever.
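A minimal sketch of that sampling loop, assuming a hypothetical model.next_token call (real inference APIs differ): each generated token is appended to the context before the next one is drawn, so the model keeps reacting to its own output until it emits the stop token.

```python
def generate(model, prompt_tokens, stop_token, max_tokens=256):
    # Start the context with the user-supplied prompt.
    context = list(prompt_tokens)
    output = []
    for _ in range(max_tokens):
        next_token = model.next_token(context)  # hypothetical one-step inference call
        if next_token == stop_token:            # the model was trained to emit this
            break
        output.append(next_token)
        context.append(next_token)              # its own output becomes part of the prompt
    return output
```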
Ah okay, I stand corrected! There is some limited feedback when a transformer-based AI is doing inference. I don't think the highlighted part (simply looping the output back in) would work, though. I suspect the output would quickly become gibberish.

I think that for an AI to be similar to a NI would require content addressable memory that dynamically updates with new inputs. It'll probably need some noise too. At the moment AI is auto-complete on steroids.
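For what it's worth, here is a toy sketch (purely illustrative; nothing here is a real library) of what "content addressable memory that dynamically updates with new inputs" could look like: items are retrieved by similarity to the query rather than by address, new inputs are written into the same store, and a bit of noise can be mixed into retrieval.

```python
import numpy as np

class ContentAddressableMemory:
    def __init__(self):
        self.keys = []    # vector representations of stored items
        self.values = []  # the stored items themselves

    def write(self, key_vec, value):
        # New inputs update the memory as they arrive.
        self.keys.append(np.asarray(key_vec, dtype=float))
        self.values.append(value)

    def read(self, query_vec, noise_scale=0.0):
        # Retrieve by content: return the item whose key is most similar to the query.
        query = np.asarray(query_vec, dtype=float)
        if noise_scale:
            query = query + np.random.normal(0.0, noise_scale, query.shape)
        sims = [k @ query / (np.linalg.norm(k) * np.linalg.norm(query))
                for k in self.keys]
        return self.values[int(np.argmax(sims))]
```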
 
Ah okay, I stand corrected! There is some limited feedback when a transformer-based AI is doing inference. I don't think the highlighted part (simply looping the output back in) would work, though. I suspect the output would quickly become gibberish.

I think that for an AI to be similar to a NI would require content addressable memory that dynamically updates with new inputs. It'll probably need some noise too. At the moment AI is auto-complete on steroids.
Sure, the differences are obvious. But there are also not-so-obvious similarities. The model itself does not change, but the prompt does. It's similar to something like short-term memory, and it can hold hundreds of thousands of tokens in current models. Also, the model itself can change - that's the fine-tuning phase .. it's more computationally intensive than inference, but it is possible for the model to keep learning. LLMs don't do that because, again, it's not wanted in chatbot applications.
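Roughly, the "prompt as short-term memory" point looks like this (model.chat is a stand-in, not any real API): the weights stay fixed between turns, but the whole conversation so far is resent as input, so earlier outputs keep influencing later ones.

```python
history = []  # the conversation so far: this is the "short-term memory"

def ask(model, user_message):
    history.append(("user", user_message))
    # The weights never change between turns; only the input grows. A real system
    # trims the history to fit the context window (hundreds of thousands of tokens
    # in current models), which is why it behaves like *short-term* memory.
    reply = model.chat(history)
    history.append(("assistant", reply))  # the model's own words become context
    return reply
```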
LLMs can also play computer games .. with varying success. But it's simply the game which provides the input, the model reacts .. and then there is special code which interprets the output and applies it to the game. Which is, in essence, the model running in a loop.
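That game-playing setup is essentially the loop below (names are illustrative, not any particular framework): the game supplies the observation, the model emits text, and glue code parses that text into an action and applies it back to the game.

```python
def parse_action(text):
    # Placeholder for the "special code" that maps free-form model output to a game action.
    return text.strip().splitlines()[-1]

def play(game, model, max_steps=1000):
    observation = game.reset()
    for _ in range(max_steps):
        prompt = f"Game state:\n{observation}\nWhat do you do next?"
        response = model.complete(prompt)       # the LLM only ever emits text
        action = parse_action(response)         # interpret the output...
        observation, done = game.step(action)   # ...and apply it to the game
        if done:
            break
```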
Who knows whether we humans would also "quickly become gibberish" if we were cut off from all input ..
 
...snip...
Who knows whether we humans would also "quickly become gibberish" if we were cut off from all input ..
It's pretty much impossible to cut us off from all input - but we know extreme sensory deprivation for even quite short periods of time can be very harmful to our "minds", including cognitive weakening and even, apparently, changes in brain volume. If we do end up producing an AI capable of NI-like cognition, we have to be careful to ensure we are not being cruel. Yet another reason why, in terms of AI, we shouldn't be chasing whatever NI is.
 
You should have told the Wright brothers that.
Otto Lilienthal flew gliders some 2,000 times, a decade before the Wright brothers' first successful powered aircraft. The main obstacles to powered heavier-than-air flight were an engine with a sufficient power-to-weight ratio and control surfaces adequate to control the aircraft. Samuel Langley's aircraft, like many others, was structurally weak and had poor aerodynamics, which are odd failures considering that he had many examples, from Lilienthal and others, of how to build a successful aircraft. His engine actually was more powerful per unit weight than the Wright brothers' engine; in fact, Langley's engine had the best power-to-weight ratio of any engine in the world for the next decade. He just built a crummy aircraft. In hindsight, we see that many people were very close to the solution but, oddly, didn't make the basic adjustments necessary for success. This is a far different situation from AI, in which we are many orders of magnitude from anything like human intelligence.
 
Nobody has attempted to completely simulate an entire human brain before.

But again, this is irrelevant because you haven't proven - or even demonstrated - that the only way to synthesise a mind is to simulate a biological brain.
I would think that it is self-evident that if you are trying to get human intelligence, you would need to reproduce whatever means produces human intelligence. Do you think that margarine actually is the same thing as butter? LLMs mimic human intelligence now. Why don't you cite them as a counter-example to my claim?
 
it is possible for the model to keep learning. LLMs don't do that because, again, it's not wanted in chatbot applications.
It isn't wanted partly because it can make the AI unstable.

We can't achieve human-like intelligence simply by doing more of the same. Even with bigger and more powerful computers, AI companies are reportedly seeing diminishing returns.
 
Otto Lilienthal flew gliders some 2,000 times, a decade before the Wright brothers' first successful powered aircraft. The main obstacles to powered heavier-than-air flight were an engine with a sufficient power-to-weight ratio and control surfaces adequate to control the aircraft. Samuel Langley's aircraft, like many others, was structurally weak and had poor aerodynamics, which are odd failures considering that he had many examples, from Lilienthal and others, of how to build a successful aircraft. His engine actually was more powerful per unit weight than the Wright brothers' engine; in fact, Langley's engine had the best power-to-weight ratio of any engine in the world for the next decade. He just built a crummy aircraft. In hindsight, we see that many people were very close to the solution but, oddly, didn't make the basic adjustments necessary for success. This is a far different situation from AI, in which we are many orders of magnitude from anything like human intelligence.

I would think that it is self-evident that if you are trying to get human intelligence, you would need to reproduce whatever means produces human intelligence. Do you think that margarine actually is the same thing as butter? LLMs mimic human intelligence now. Why don't you cite them as a counter-example to my claim?

I know people talk about human intelligence as the goal, or the grail or whatever, but more and more I'm thinking, that's not really the direction we're headed in. I think where we're actually going is to an AI that complements human intelligence, much the way working animals complement human intelligence, but don't replicate it.

Ukrainians don't need human-intelligent AIs to pilot drones on the battlefield. They just need a statistical parrot that can reliably guide itself to a target if it's cut off from its control signal. It doesn't really matter if the drone can't comprehend a tank the way a human does, as long as it successfully emulates the result a human would get.

There are a lot of day-to-day tasks that humans aren't using their full intelligence to complete.
 
I would think that it is self-evident that if you are trying to get human intelligence, you would need to reproduce whatever means produces human intelligence. Do you think that margarine actually is the same thing as butter?
I'm sure that's one way of doing it, but as you say it seems like a pretty hard way. Do I know that there are other ways? No. Do you know that there aren't? Also no.

LLMs mimic human intelligence now. Why don't you cite them as a counter-example to my claim?
And yet, LLMs are constructed in no way like a human brain. I don't need to cite them as a counter-example to your claim because you just did.
 
I'm sure that's one way of doing it, but as you say it seems like a pretty hard way. Do I know that there are other ways? No. Do you know that there aren't? Also no.
I know that things that are different are not the same. At some level of inspection, replicas betray themselves as not the originals because they are not the same.
And yet, LLMs are constructed in no way like a human brain. I don't need to cite them as a counter-example to your claim because you just did.
I didn't cite LLMs as a counter-example. I'm emphasizing that they are sub-par imitations of the real thing. It's like the difference between a woman and a painting of a woman; they aren't the same, even if they look the same. Don't confuse the face in the mirror with the real face.
 
I know that things that are different are not the same. At some level of inspection, replicas betray themselves as not the originals because they are not the same.

I didn't cite LLMs as a counter-example. I'm emphasizing that they are sub-par imitations of the real thing. It's like the difference between a woman and a painting of a woman; they aren't the same, even if they look the same. Don't confuse the face in the mirror with the real face.
Nobody expects LLMs to be on par with peak human intelligence.

The expectation is that they'll be on par with some subset of human activity, such that they can complete certain tasks about as well as a human in the same role.

And that appears to be true.
 
I didn't cite LLMs as a counter-example. I'm emphasizing that they are sub-par imitations of the real thing. It's like the difference between a woman and a painting of a woman; they aren't the same, even if they look the same. Don't confuse the face in the mirror with the real face.
Nobody is suggesting that an LLM is even remotely intelligent. Well... except for that one guy from IBM. But he was wrong.
 
Nobody expects LLMs to be on par with peak human intelligence.

The expectation is that they'll be on par with some subset of human activity, such that they can complete certain tasks about as well as a human in the same role.

And that appears to be true.
I've been using Google Gemini for about 2.5 years. Although its responses sound human, there are a lot of times when it is obvious that it has no awareness of what it is saying. It can just as easily spout complete nonsense as profound advice, and they both can sound the same if the user isn't paying attention.
 
LLM learning is pretty much the exact opposite of how living organisms learn.
It is highly unlikely that you will end up with something like a brain when you are not doing anything similar to what a brain does.
 
I've been using Google Gemini for about 2.5 years.
Using it for what?

Although its responses sound human, there are a lot of times when it is obvious that it has no awareness of what it is saying.
And that's fine.

It can just as easily spout complete nonsense as profound advice, and they both can sound the same if the user isn't paying attention.
Nobody says it's perfect. And humans can also spout nonsense.
 
I would think that it is self-evident that if you are trying to get human intelligence, you would need to reproduce whatever means produces human intelligence. Do you think that margarine actually is the same thing as butter? LLMs mimic human intelligence now. Why don't you cite them as a counter-example to my claim?
No, LLMs mimic some human behaviours that we used to think were the preserve of NI, but at the moment there is no sign that they have a human-like “I” (if such a thing even exists in humans). They do challenge what we used to think were signs of human intelligence.
 
I've been using Google Gemini for about 2.5 years. Although its responses sound human, there are a lot of times when it is obvious that it has no awareness of what it is saying. It can just as easily spout complete nonsense as profound advice, and they both can sound the same if the user isn't paying attention.
As can humans; indeed, I'd say that in regard to spouting complete nonsense, humans still win over AIs.
 
