Maybe - but, I don't know, I kind of think that presumes AI is going to be thinking through its trolley problems the way a human would, only faster, and I'm not sure that's true at all.
I think the confusion is partly due to the way we frame the issue - we talk about AIs having "alignment", and teaching them things like values and ethical principles and then, I guess, just letting them mull those over and come to their own conclusions? But I don't see that as likely or even prudent; I see the more realistic course of action being just a line of code telling the AI that if there's a human in the way, it should stop, end of story.
In other words, I don't see us trying to "teach AI ethics" and allowing it to make ethical decisions; I see humans making all of the ethical decisions long ahead of time and then just giving the AI a set of standing instructions.
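Purely for illustration, here's the kind of thing I have in mind - a toy sketch where the names (WorldState, plan_next_action, control_step) are all made up, not any real robotics or AI system:

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    human_in_path: bool
    goal_reached: bool = False

def plan_next_action(state: WorldState) -> str:
    """Stand-in for whatever learned/optimizing policy the AI actually runs."""
    return "idle" if state.goal_reached else "move_forward"

def control_step(state: WorldState) -> str:
    # The "standing instruction": a hard-coded rule checked before the
    # policy ever runs. No ethical deliberation, no weighing of outcomes.
    if state.human_in_path:
        return "emergency_stop"  # end of story
    return plan_next_action(state)

print(control_step(WorldState(human_in_path=True)))   # -> emergency_stop
print(control_step(WorldState(human_in_path=False)))  # -> move_forward
```

The point being that the "ethics" lives entirely in that one if-statement, which a human wrote long before deployment; the AI never reasons about it at all.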