

It seems to me that human dreams may be a closer analog to LLM hallucinations. To me, dreams have a very Markov-chain feel to them, as my brain is constantly choosing what happens next based on what just happened. No planning, just sequential extrapolation based on what's happened in the dream so far, whatever is going on in my life at the moment, my mood, etc. As I understand it, that sort of sequential extrapolation is how LLMs choose the next word.
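
For what it's worth, here's a toy Python sketch of that kind of sequential extrapolation: a bigram Markov chain that picks each next word purely from what followed the current word in its (made-up) training text. Real LLMs condition on far more than the previous word, so treat this only as an illustration of the "no planning, one step at a time" flavour, not as how an actual LLM works.

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: each next word is chosen only from the words
# that followed the current word in the training text.

def build_bigrams(text):
    words = text.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=10):
    word, output = start, [start]
    for _ in range(length):
        choices = table.get(word)
        if not choices:          # dead end: no observed continuation
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

sample = "the cat sat on the mat and the cat slept on the sofa"
print(generate(build_bigrams(sample), "the"))
```

Each step looks only at the word just produced, which is why the output can wander off anywhere the training text allows, with no overall plan.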
I said above that LLMs mimic human behaviour. I don't believe they "think", and they are not duplicating the "how" of how humans think, BUT I also believe they should make us rethink a lot of our assumptions about what we mean when we say we "think".
 
"dysfunction of the sensory apparatus" they aren't, they are what your quoted source says they are. In a visual hallucination my sensory apparatus i.e. eyes aren't malfunctioning.
There's more to the sensory apparatus than just the eyes. The optic nerves and visual cortex are part of the sensory apparatus.

That aside, I would have much preferred that they had used the word "malfunctioning" when LLM AIs malfunction; using a word like "hallucination" helps feed the idea that these LLMs are doing more than mimicking human behaviour.
But LLMs aren't malfunctioning when they hallucinate. They are functioning as designed.
 
There's more to the sensory apparatus than just the eyes. The optic nerves and visual cortex are part of the sensory apparatus.
Argue with your own source.
But LLMs aren't malfunctioning when they hallucinate. They are functioning as designed.
No they are not. Or rather they are functioning as designed if you also say planes are functioning as designed if their engines stop in flight and the plane plummets towards the ground! No one designed any of the current LLMs to provide inaccurate results or malfunction. Now what appears to be the case is that there are fundamental issues in the principles behind current LLM AIs that mean you can't be certain, in a mathematical sense, that they won't hallucinate. Which is why most of the current "leading edge" LLM AIs are in fact no longer "pure" LLMs. Many different approaches are being tried.

My personal thought is that one of the reasons they hallucinate is that they have been trained on vast amounts of "corrupted" data, in other words data created by humans, and we know humans lie, cheat, make up ◊◊◊◊◊◊◊◊ and so on, and all of that is in the data that forms the bulk of their training data. They are mimicking human behaviour because they have been trained on a dataset created by human behaviour.
 
No they are not. Or rather they are functioning as designed if you also say planes are functioning as designed if their engines stop in flight and the plane plummets towards the ground! No one designed any of the current LLMs to provide inaccurate results or malfunction.
No, they designed the LLM to produce coherent sentences, without checking whether those sentences were actually true. They designed them to produce output that appears to be factual statements, but without determining whether those statements actually are (there's a toy sketch of this below).

Using your aircraft analogy, an aircraft autopilot, if set in the wrong direction, will continue to fly in that direction, as it was designed to do, without checking whether that direction is correct.
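
To make the "functioning as designed" point concrete, here's a deliberately crude Python sketch with made-up numbers (not from any real model). The decoding step only asks which continuation is most probable given the training data; nothing in it ever asks whether the resulting claim is true.

```python
# Hypothetical next-token probabilities for the prompt below.
# The numbers are invented purely for illustration.
next_token_probs = {
    "1969": 0.46,    # plausible and happens to be true
    "1972": 0.31,    # plausible but false
    "banana": 0.01,  # implausible
}

def pick_next(probs):
    # Greedy decoding: return the most probable token.
    # There is no fact-checking step anywhere in here;
    # probability under the training data is the only criterion.
    return max(probs, key=probs.get)

prompt = "The first crewed Moon landing was in"
print(prompt, pick_next(next_token_probs))
```

If the training data had happened to make "1972" the more probable token, the same code would output it just as confidently. That is the sense in which a hallucination is the system doing exactly what it was built to do.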
 
