The response of X seems to me to be utterly inadequate, and although I'm not a lawyer, I'd guess that it would be sufficient to pass legal thresholds for liability in many jurisdictions.
I'm not a lawyer either. Being married to one and routinely associating with others creates an interest in law, but no special expertise—and certainly no qualification or authority to do more than offer lay comment.
Generally, the closer you are to the ultimate consequences of a chain of events, the more responsible you are for the outcome. The maker of a tool that can create all manner of images without regard to purpose is generally less liable than someone who intentionally uses it to produce forbidden content or content directed at an impermissible purpose. That person is closest to the circumstances from which the unlawful conduct arose, and is therefore more directly responsible. This covers a lot of law, and is represented in the famous you-can't-make-this-up case of Palsgraf v. Long Island Railroad.
There is another general doctrine that says that your responsibility to exercise discretion in controlling something is roughly proportional to your ability and willingness to do so. I can imagine Elon Musk saying something like, "We don't apply our discretion in controlling content produced by our AI, so any objection you might have should properly go to the person who used the AI."
That fails here for a couple of reasons. First, Musk is quite evidently exercising control over his AI's content. Experiments have shown that it is biased in his favor. If it can be programmed to produce content that strokes Musk's ego, it should be programmed not to produce unlawful content. Second, this doctrine doesn't really fly in the real world. Operators of content-producing and content-hosting services (including this forum) prudently restrict objectionable content under the rubric that tolerating it could easily be seen as approving of it, and might lead to reputational damage at the least, if not outright aiding the commission of a crime. And there are plenty of reasons to curate content that aren't driven by avoiding legal liability. We don't allow certain naughty words because we don't want to be that kind of forum.
If it is too difficult to prevent it creating partially undressed images of real people without their consent, maybe the product should not be available.
This is an ongoing debate. In many cases besides this one, we find that the morality of a behavior wasn't an issue until the behavior became possible. There's no need to expressly forbid something that isn't possible to do. If you sellotape a picture of your neighbor onto a pinup of a supermodel, there's little injury and therefore it's not very morally blameworthy. Everyone can tell it's fake. The ability to effortlessly create convincing (if not outright photorealistic) depictions is the step that invites moral scrutiny. But then whose fault is it?
Here it's not so much a tool as it is a hosted service. I can sell you a tree shredder and wish you the best of luck as you drive away with it in tow. What you do with it thereafter is not my business and not my responsibility. There are plenty of legitimate reasons to possess and use a tool comprised of whirling sharp blades attached to a powerful motor. But if instead I own a shredder and I offer to shred things with it on your behalf, then I become a moral agent in the shredding decisions. If you show up with a rolled-up carpet with someone's feet sticking out of it, my decision to allow you to use my equipment to shred it then becomes something I'm partially responsible for. Not only can I exercise discretion in that case, I might be obliged to.
I would say that the analogy would be Bushmaster running a free "name your target" service where people ask a Bushmaster robot to shoot something. And when they find their robot is shooting schools, instead of stopping the free service, they simply ask people not to do it.
This is pretty much the right way to think about it. You can initially hope to be agnostic about the targets, but under the law you should know that killbots are inherently and intentionally dangerous, and therefore that your desire to operate a killbot service comes with some legal risk to you that can't be shifted wholly to the client. Burying your head in the sand isn't a defense.
A service that can produce any kind of content must contemplate the possibility of its producing unlawful content. The operator of that service bears some responsibility if he knew or should have known that the use of the service was directed at an unlawful end. There may be many lawful ends for some content that might seem questionable on its face, but for CSAM there really are no legitimate purposes that a random client could have. The defense of agnosticism isn't very convincing in that case.
The legal doctrine here might be akin to res ipsa loquitur, "the thing speaks for itself." This is where we get the concept of strict liability. Intent is irrelevant in those cases. The mere fact that something bad happened is enough to say that someone is liable even if they did not intend the harm. Thus the mere existence of CSAM created by your service is all the evidence needed to hold you, the operator of the service, liable for it, because you had a minimum duty of care.
As far as responsibility is concerned, would it actually matter in law whether the body is offering a service through an AI agent, or through a person? Beyond personal liability for the hypothetical person who might be providing the service in place of the AI.
Substituting a human agent for an AI tool invokes the respondeat superior doctrine: the principal is responsible for the actions of his agent. If I hire an artist to produce unlawful content at a client's request, I am just as responsible as if I had used an AI tool to do it myself.
Remember, it's just software.
Would it be similar to organisations providing services using kids? Presumably cases have occurred with organisations like the Girl Scouts, and presumably there would be a responsible adult?
I'm not sure how you're trying to tie this into the question. There are, for example, talent agencies that specialize in representing minors for employment in situations where they might be photographed or filmed, and thereby subject to various contrived depictions. In those cases there is quite definitely a need for a legally recognized guardian, and that guardian would be very liable if they knowingly or negligently allowed their minor ward to participate in the production of, say, sexualized content. Additionally, the employer often has some responsibility in loco parentis.
In theater, the director and producers of a show have a duty of care for the safety and well-being of minor participants. Those minors must have guardians with a clear legal duty to care for them, and the agencies that represent minors in procuring such employment have due-diligence requirements.