I had really hoped that AI had moved on from ELIZA
WP. And it claims to have done so. Asking Google "How does AI understand text?" gets me "About 4,370,000,000 results". It is more than just matching words. Contextual "understanding" is now claimed.
But not, as I noted in the OP, apparently good enough.
"Contextual" means they match not just the words, but the nearby words, and disambiguate based on the nearby words.
I studied AI in college in the early 1980s, and it was terrible. It was Eliza, and simple forms of Eliza at that. The idea of machine translation was being kicked around, and the basic idea was to map the input sentences into a sort of neutral, language-independent conceptual framework, and then translate that framework into the target language. In other words, form a rudimentary understanding of the input sentence, and using that understanding, form the sentences in the target.
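The interlingua idea above can be sketched as a two-stage pipeline. This is a deliberately tiny toy under stated assumptions: the word-to-concept and concept-to-word tables are hypothetical, and real systems of that era used parsers and semantic representations far richer than a word lookup.

```python
# Toy "interlingua" pipeline: source words -> neutral concepts -> target words.
# Both vocabulary tables are hypothetical, for illustration only.
EN_TO_CONCEPT = {"the": None, "cat": "CAT", "sleeps": "SLEEP"}
CONCEPT_TO_FR = {"CAT": "le chat", "SLEEP": "dort"}

def to_concepts(sentence):
    """Map English surface words to an ordered, language-neutral concept list."""
    return [EN_TO_CONCEPT[w] for w in sentence.lower().split()
            if EN_TO_CONCEPT.get(w) is not None]

def concepts_to_french(concepts):
    """Generate French surface text from the neutral concept list."""
    return " ".join(CONCEPT_TO_FR[c] for c in concepts)

print(concepts_to_french(to_concepts("The cat sleeps")))  # le chat dort
```

The appeal of the design was that adding a new language only meant writing mappings to and from the neutral framework, not a translator for every language pair.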
And machine translation sucked, for a long time. One day, though, I decided to try one of those funny experiments where you translate from the source language to the target and back, and see how mangled the original thoughts come out. I had done it before, sometimes with amusing results, but always with mangled meaning and grammar. Then, I did it again and...it worked. The "round trip" translations had very few errors. I was amazed, and I looked for information on how they did it. The answer was that they gave up any pretense of understanding words or sentences. They just fed the system lots of training data, so it knew that when one language said one thing, the other language said that other thing. There were rules to check it against, but they were patterns, not concepts.
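The round-trip experiment itself has a simple shape, which a short harness can show. This is only a sketch: `translate()` here is a stub backed by a tiny hypothetical lookup table, standing in for whatever real translation service you would actually call.

```python
# Round-trip translation check: src -> dst -> src, then compare to the original.
# translate() is a stub with a hypothetical two-entry table, not a real service.
ROUND_TRIP_TABLE = {
    ("en", "de", "The spirit is willing but the flesh is weak."):
        "Der Geist ist willig, aber das Fleisch ist schwach.",
    ("de", "en", "Der Geist ist willig, aber das Fleisch ist schwach."):
        "The spirit is willing but the flesh is weak.",
}

def translate(src, dst, text):
    """Stand-in for a real translation API call."""
    return ROUND_TRIP_TABLE[(src, dst, text)]

def round_trip_survives(src, dst, text):
    """Translate there and back, and report whether the text comes home intact."""
    back = translate(dst, src, translate(src, dst, text))
    return back == text
```

With a real service plugged in, a clean round trip is weak but suggestive evidence that little meaning was lost either way.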
Trying to make ethical judgements, though, requires actual understanding. I am confident that they are trying, but they are just matching words, and the results are terrible.