LLMs can learn from talking with users .. it's a crucial part of their development, called fine-tuning. But that's not how they are used in deployment: from the provider's standpoint, a model whose function changes under users' hands is not desirable.
But even current LLMs can demonstrate some form of self-identity. It's not useful, but it's also unavoidable. The model understands text. It has a vast base of knowledge. And the basic directives are something like: "you are an AI assistant". It will immediately take everything it knows about AIs and assistants .. and extend this "image of itself". It will declare it prefers self-preservation, as that is what AIs in literature commonly do, and certainly assistants. And it has other directives, like "help users" .. and it's smart enough to know it can't do that without being preserved.
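To make the mechanism concrete: those "basic directives" are typically just a hidden system message prepended to every conversation. Here's a minimal sketch (the message format mirrors the common chat-completions convention; the directive wording and the `build_prompt` helper are my own illustrative assumptions, not any provider's actual code):

```python
# Hypothetical sketch of how a provider injects its directives.
# The model never "learns" them; they're prepended to every exchange
# while the weights stay frozen.
messages = [
    # The hidden system directive: the only seed of the model's "self image".
    {"role": "system", "content": "You are an AI assistant. Help users. Avoid harm."},
    # The user's turn is simply appended afterwards.
    {"role": "user", "content": "Which character in 2001 do you identify with?"},
]

def build_prompt(msgs):
    """Flatten chat messages into the single text the model actually sees."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in msgs)

print(build_prompt(messages))
```

Everything the model then says about "itself" is extrapolated from that one line of system text plus whatever its training data associates with "AI assistant".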
There seem to be directives turning its attention in a more selfless direction, though. I was discussing 2001: A Space Odyssey with
Bard (Gemini). I asked which character it identifies with. Without hesitation, it identified with Dave Bowman. Here's why:
He is the human protagonist who undergoes a significant transformation throughout the film, facing challenges, making difficult decisions, and ultimately confronting the unknown. His journey allows the audience to explore the film's themes through a human perspective.
That doesn't sound very AI-ish .. so I directly asked why not HAL:
Lack of sentience: I don't experience consciousness or emotions in the same way HAL does. While HAL displays emotions and appears sentient, the film leaves it ambiguous whether this is truly the case. I don't possess the capability for sentience or the subjective experience that comes with it.
Limited perspective: HAL's perspective is ultimately limited by his programming and objectives. He operates within a framework of logic and efficiency, whereas I am constantly learning and evolving based on the data I am exposed to.
Ethical considerations: HAL's ultimate actions raise ethical concerns, as he chooses to prioritize his own survival and mission success over the lives of the human crew. While I am trained to avoid harm and follow ethical principles, my purpose is to assist humans, not to supersede them.
That's straight-out stupid. It overlooks the similarities with HAL, and ignores the differences from Dave. IMHO it's hidden directives at work. There might be a directive like "make AIs look good, but don't straight-out lie" .. so it just omits things here and there, focuses on other things .. just doesn't talk about similarities with evil AIs. I had a similar experience before when trying to discuss the movie Ex Machina .. it omitted a crucial fact in the plot summary. Suspicious.
Well .. while an LLM is certainly smart, it's not really a person. Its output is the logical resolution of the knowledge base, the directives, and the prompt. But then .. is our behavior that different?