That's assuming self-preservation is one of its goals. I wouldn't assume that's going to be the case.
Life has self-preservation built in because the stuff that didn't, didn't last. But AIs don't reproduce, and they don't experience natural selection. They experience artificial selection from humans. If we don't either explicitly program in self-preservation or implicitly select for it, there's no reason to expect it.
Self-preservation is a reasonable sub-goal given most goals you could actually give your AI. Say you've got an over-engineered agentic intelligent alarm clock designed with the goal of waking you up in the morning, and someone shuts it off: it's not going to achieve its goal of waking you up at the specified time. If it's intelligent enough, it can predict that outcome, and also estimate the chance of it happening. If that chance is high enough, it might reasonably plan countermeasures (access to a secondary power source, say), not for the sake of self-preservation, but for the sake of achieving the explicit goal of waking you up.
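To make the alarm-clock case concrete, here's a toy sketch (the plan names and probabilities are made up purely for illustration, not anyone's actual system): a planner that scores plans only by the probability that the alarm actually fires, with no self-preservation term anywhere, still ends up picking the "keep myself powered on" countermeasure.

```python
# Toy planner: the ONLY explicit goal is "the alarm rings on time".
# Self-preservation is never mentioned; it just wins the expected-value comparison.

P_SHUTDOWN = 0.3  # assumed chance someone unplugs the clock overnight

# plan -> probability the alarm still fires IF a shutdown attempt happens
PLANS = {
    "do_nothing":             0.0,   # an unplugged clock can't ring
    "use_backup_battery":     0.95,  # hypothetical countermeasure
    "nag_user_not_to_unplug": 0.5,
}

def p_goal_achieved(p_survives_shutdown: float) -> float:
    """Probability the explicit goal (alarm rings) is achieved under a given plan."""
    # If nobody tries to shut it off, the alarm rings; otherwise it rings
    # only if the countermeasure keeps the clock running.
    return (1 - P_SHUTDOWN) * 1.0 + P_SHUTDOWN * p_survives_shutdown

best = max(PLANS, key=lambda plan: p_goal_achieved(PLANS[plan]))
for plan, p in PLANS.items():
    print(f"{plan:24s} -> P(goal) = {p_goal_achieved(p):.2f}")
print("chosen plan:", best)  # the backup-battery plan wins purely on goal odds
```

Nothing in that scoring function cares whether the clock survives; staying powered on just happens to be the best route to the goal it was given.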
The whole point of an intelligent agentic system is that it can make predictions about future states and plans for how to deal with them, including forming sub-goals. And self-preservation is so necessary for achieving its explicit goals (the ones you created it for) that it's reasonable to expect it to arise.