Ivor the Engineer
Penultimate Amazing
Ah okay, I stand corrected! There is some limited feedback when a transformer-based AI is doing inference.

I don't think the highlighted would work. I suspect the output would quickly become gibberish.

The feedback is there though .. the output of the network is part of the prompt when you ask again. Oh wait, even more than that. LLMs generate one token at a time .. so when they are outputting the second token they are also reacting to their own first token, etc.
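Roughly what I mean, in throwaway Python (model() here is just a stand-in for whatever predicts the next token, not any real library's API):

```python
# Toy sketch of autoregressive decoding: each new token is appended to the
# context, so the model "sees" its own earlier output when predicting the next one.

def generate(model, prompt_tokens, max_new_tokens=50):
    context = list(prompt_tokens)           # the prompt the user typed
    for _ in range(max_new_tokens):
        next_token = model(context)         # prediction conditioned on prompt + own output so far
        context.append(next_token)          # the feedback: output becomes part of the input
    return context[len(prompt_tokens):]     # just the newly generated part
```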
So an LLM also generates its own prompt. They only react when prompted, sure .. but that's an artificial limitation. Obviously you can simply loop the output. LLMs also have to be trained to stop the response at some point (to emit a special stop token). They totally can blab forever.
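Again a toy sketch, same made-up model() as above, with STOP as a hypothetical stop-token id:

```python
STOP = -1  # hypothetical id of the special stop token the model was trained to emit

def generate_until_stop(model, context, max_new_tokens=200):
    out = []
    for _ in range(max_new_tokens):
        tok = model(context + out)
        if tok == STOP:                     # the trained stopping behaviour
            break
        out.append(tok)
    return out

def endless_blab(model, seed_tokens, rounds=10):
    context = list(seed_tokens)
    for _ in range(rounds):                 # remove the "artificial limitation":
        reply = generate_until_stop(model, context)
        context += reply                    # loop the output back in as the next prompt
    return context
```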
I think that for an AI to be similar to an NI would require content-addressable memory that dynamically updates with new inputs. It'll probably need some noise too. At the moment, AI is auto-complete on steroids.
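Very rough sketch of the kind of thing I mean by content-addressable memory (nothing to do with how current LLMs actually work, just NumPy and my own made-up ContentMemory class): store vectors, retrieve by similarity to the current input rather than by address, keep writing as new inputs arrive, with a bit of noise thrown in.

```python
import numpy as np

class ContentMemory:
    def __init__(self, dim, noise=0.01):
        self.items = []                     # list of (key, value) vector pairs
        self.dim = dim
        self.noise = noise

    def write(self, key, value):
        self.items.append((key, value))     # memory updates with every new input

    def read(self, query):
        # content-addressable lookup: return the value whose key best matches the query
        query = query + np.random.normal(0.0, self.noise, self.dim)  # a little noise
        sims = [float(query @ k) / (np.linalg.norm(query) * np.linalg.norm(k) + 1e-9)
                for k, _ in self.items]
        return self.items[int(np.argmax(sims))][1]
```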

