• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Cont: Musk buys Twitter II

Manipulating an image using Photoshop is a skill that must be learned. It takes time, a great deal of effort, and probably an expensive course or two.

Manipulating an image using AI is as simple as typing "show me a picture of Jennifer Lawrence in a bikini".

They're not the same.
 
There's a legitimate purpose in, say, a parent uploading an innocent picture of their minor child and asking the AI to perform innocent manipulations such as placing the child in a fairy-tale setting. That's neither unlawful nor (IMHO) morally objectionable. It's usually more effective to apply content restrictions at the output. So an effective prompt would be something like, "Do not release images that are both sexualized (or, say, violent) and depict persons that appear to be minors." That would apply after the image is generated but before it's presented to the user. But yes, an AI's ability to estimate the age of people in images would be prima facie evidence that such a restriction is technically possible.
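To make the idea concrete, here is a minimal sketch of what such an output-side gate could look like. The three classifier functions are hypothetical placeholders for whatever age-estimation and content-rating models a provider actually runs; only the gating logic is the point.

Code:
# Sketch of an output-side moderation gate: runs after the image is
# generated, before it is shown to the user. The classifier functions
# below are hypothetical stand-ins, not any provider's real API.

def estimate_minimum_age(image):
    """Placeholder: estimated age of the youngest person depicted."""
    raise NotImplementedError

def sexualization_score(image):
    """Placeholder: 0..1 score for sexualized content."""
    raise NotImplementedError

def violence_score(image):
    """Placeholder: 0..1 score for violent content."""
    raise NotImplementedError

def may_release(image, age_threshold=18, content_threshold=0.5):
    """Block anything that both appears to depict a minor and is
    sexualized or violent; release everything else."""
    appears_minor = estimate_minimum_age(image) < age_threshold
    objectionable = (sexualization_score(image) >= content_threshold
                     or violence_score(image) >= content_threshold)
    return not (appears_minor and objectionable)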
I'm willing to believe that there are some non-illegitimate reasons for wanting to make lifelike images of real people with most of their clothes removed, but none that are so vital that they need access to a free service with no oversight.

The response of X seems to me to be utterly inadequate, and although I'm not a lawyer, I'd guess that it would be sufficient to pass legal thresholds for liability in many jurisdictions.

X could have stopped this service as soon as it was aware of the problem, with very few adverse consequences to anyone. Either their safety systems are inadequate (the problem was reported but never escalated to someone with the authority to stop it), or the person with that responsibility didn't do their job. And we know that the owner has been aware of this for some time from external information if not internal reports, so I'd say Musk is responsible for not stopping it as soon as he knew.

If it is too difficult to prevent it creating partially undressed images of real people without their consent, maybe the product should not be available.
 
I'm no lawyer, but I do know that lines get crossed when a product/service is being used for nefarious purposes with the provider's knowledge. If the Sandy Hook shooter had ordered a rifle specifically requesting one that would be good at killing a classroom full of children, then yeah, Bushmaster is at least partially liable. Musk definitely cannot claim ignorance about what his product is being used for at this point.
I would say that the analogy would be Bushmaster running a free "name your target" service where people ask a Bushmaster robot to shoot something. And when they find their robot is shooting schools, instead of stopping the free service, they simply ask people not to do it.


Elsewhere my view is

Well I guess it depends on whether one thinks that having AI systems make widely-illegal deepfake images of specific minors is a price worth paying for the ability to easily make somewhat amusing memes. I tend to think it's not.
 
@JayUtah

As far as responsibility is concerned, would it actually matter in law whether the body is offering a service through an AI agent, or through a person? Beyond personal liability for the hypothetical person who might be providing the service in place of the AI.

Would it be similar to organisations providing services using kids? Presumably cases have occurred with organisations like the girl scouts, and presumably there would be a responsible adult?
 
Well I guess it depends on whether one thinks that having AI systems make widely-illegal deepfake images of specific minors is a price worth paying for the ability to easily make somewhat amusing memes. I tend to think it's not.

I think a more straightforward theory is that the right just really likes to sexualize children. It tracks with a lot of other behavior we see from them.
 
I think a more straightforward theory is that the right just really likes to sexualize children. It tracks with a lot of other behavior we see from them.
So you're thinking more along the lines of?

Well I guess it depends on whether one thinks that having AI systems make somewhat amusing memes is sufficient cover for the ability to easily make widely-illegal deepfake images of specific minors. I tend to think it's not.
 
The response of X seems to me to be utterly inadequate, and although I'm not a lawyer, I'd guess that it would be sufficient to pass legal thresholds for liability in many jurisdictions.
I'm not a lawyer either. Being married to one and routinely associating with others creates an interest in law, but no special expertise—and certainly no qualification or authority to do more than offer lay comment.

Generally, the closer you are to the ultimate consequences of a chain of events, the more responsible you are for the outcome. The maker of a tool that can create all manner of images without regard to purpose is generally less liable than someone who intentionally uses it to produce forbidden content or content directed at an impermissible purpose. That person is closest to the circumstances from which the unlawful conduct arose, and is therefore more directly responsible. This covers a lot of law, and is represented in the famous you-can't-make-this-up case of Palsgraf v. Long Island Railroad.

There is another general doctrine that says that your responsibility to exercise discretion in controlling something is roughly proportional to your ability and willingness to do so. I can imagine Elon Musk saying something like, "We don't apply our discretion in controlling content produced by our AI, so any objection you might have should properly go to the person who used the AI."

That fails here for a couple of reasons. First, Musk is quite evidently exercising control over his AI's content. Experiments have shown that it is biased in his favor. If it can be programmed to produce content that strokes Musk's ego, it should be programmed not to produce unlawful content. Second, this doctrine doesn't really fly in the real world. Operators of content-producing and content-hosting services (including this forum) prudently restrict content under the rubric that any tolerance of it would easily be seen as approval and might lead to at least reputational damage if not outright aiding the commission of a crime. And there are plenty of reasons to curate content that aren't driven by avoiding legal liability. We don't allow certain naughty words because we don't want to be that kind of forum.

If it is too difficult to prevent it creating partially undressed images of real people without their consent, maybe the product should not be available.
This is an ongoing debate. In many cases besides this one, we find that the morality of a behavior wasn't an issue until the behavior became possible. There's no need to expressly forbid something that isn't possible to do. If you sellotape a picture of your neighbor onto a pinup of a supermodel, there's little injury and therefore it's not very morally blameworthy. Everyone can tell it's fake. The ability to effortlessly create convincing (if not outright photorealistic) depictions is the step that incurs moral scrutiny. But then whose fault is it?

Here it's not so much a tool as it is a hosted service. I can sell you a tree shredder and wish you the best of luck as you drive away with it in tow. What you do with it thereafter is not my business and not my responsibility. There are plenty of legitimate reasons to possess and use a tool comprised of whirling sharp blades attached to a powerful motor. But if instead I own a shredder and I offer to shred things with it on your behalf, then I become a moral agent in the shredding decisions. If you show up with a rolled-up carpet with someone's feet sticking out of it, my decision to allow you to use my equipment to shred it then becomes something I'm partially responsible for. Not only can I exercise discretion in that case, I might be obliged to.

I would say that the analogy would be Bushmaster running a free "name your target" service where people ask a Bushmaster robot to shoot something. And when they find their robot is shooting schools, instead of stopping the free service, they simply ask people not to do it.
This is pretty much the right way to think about it. You can initially hope to be agnostic about the targets, but under the law you should know that killbots are inherently and intentionally dangerous, and therefore that your desire to operate a killbot service comes with some legal risk to you that can't be shifted wholly to the client. Keeping your head in the sand isn't a panacea.

A service that can produce any kind of content must contemplate the possibility of its producing unlawful content. The operator of that service bears some responsibility if he knew or should have known that the use of the service was directed at an unlawful end. There may be many lawful ends for some content that might seem questionable on its face, but for CSAM there really are no legitimate purposes that a random client could have. The defense of agnosticism isn't very convincing in that case.

The legal doctrine here might be akin to res ipsa loquitur—"the thing speaks for itself." This is where we get the concept of strict liability. Intent is irrelevant in those cases. The mere fact that something bad happened is enough to say that someone is liable even if they did not intend the harm. Thus the mere existence of CSAM created by your service is all the evidence that's needed that you, the operator of the service, are liable for it because you had a minimum duty of care.

As far as responsibility is concerned, would it actually matter in law whether the body is offering a service through an AI agent, or through a person? Beyond personal liability for the hypothetical person who might be providing the service in place of the AI.
Substituting a human agent for an AI tool invokes the respondeat superior doctrine. The principal is responsible for the actions of his agent. If I hire an artist to produce unlawful content at a client's request, I am just as responsible as if I had used an AI tool to do it myself.

Remember, it's just software.

Would it be similar to organisations providing services using kids? Presumably cases have occurred with organisations like the girl scouts, and presumably there would be a responsible adult?
I'm not sure how you're trying to tie this into the question. There are, for example, talent agencies that specialize in representing minors for employment in situations where they might be photographed or filmed, and thereby subject to various contrived depictions. In those cases there is quite definitely a need for a legally recognized guardian, and that guardian would be very liable if they knowingly or negligently allowed their minor ward to participate in the production of, say, sexualized content. Additionally the employer often has some responsibility in loco parentis that applies.

In theater, the director and producers of a show have a duty of care for the safety and well-being of minor participants. Minor participants must have guardians with a clear legal duty to care for the minor. Agencies that represent minors in procuring such employment have due diligence requirements.
 
I'm not sure how you're trying to tie this into the question. There are, for example, talent agencies that specialize in representing minors for employment in situations where they might be photographed or filmed, and thereby subject to various contrived depictions. In those cases there is quite definitely a need for a legally recognized guardian, and that guardian would be very liable if they knowingly or negligently allowed their minor ward to participate in the production of, say, sexualized content. Additionally the employer often has some responsibility in loco parentis that applies.

In theater, the director and producers of a show have a duty of care for the safety and well-being of minor participants. Minor participants must have guardians with a clear legal duty to care for the minor. Agencies that represent minors in procuring such employment have due diligence requirements.
I was actually thinking of the opposite situation.

Say one is in charge of kids for a charity car wash and they cause damage for whatever reason. I suspect the responsible adult would most likely bear the blame. And the younger the kids, the more the adult would be considered responsible?
 
In the UK pretty much anything to do with CSAM is a strict liability matter. Which is why Stross was telling people in the UK not to use X: an image being displayed on your screen, even with you doing absolutely nothing to select it (as I believe can happen on X), is a criminal offence. Saying you didn't request it, didn't send it, didn't make it, and didn't search for it matters not one iota as regards committing the offence, although it may reduce your sentence if you find yourself in court.
 
Here it's not so much a tool as it is a hosted service. I can sell you a tree shredder and wish you the best of luck as you drive away with it in tow. What you do with it thereafter is not my business and not my responsibility. There are plenty of legitimate reasons to possess and use a tool comprised of whirling sharp blades attached to a powerful motor. But if instead I own a shredder and I offer to shred things with it on your behalf, then I become a moral agent in the shredding decisions. If you show up with a rolled-up carpet with someone's feet sticking out of it, my decision to allow you to use my equipment to shred it then becomes something I'm partially responsible for. Not only can I exercise discretion in that case, I might be obliged to
Nice way of putting what I was trying to express
 
Say one is in charge of kids for a charity car wash and they cause damage for whatever reason. I suspect the responsible adult would most likely bear the blame. And the younger the kids, the more the adult would be considered responsible?
In cases like that, the minor's duty of care would be commensurate to a reasonable person of that same age. Any person held accountable for negligence must be mentally capable of that negligence. As an example, in my state no child younger than 5 years old may be considered negligent for any reason. In other states, 7 years old is the limit. Older than that, minors may be considered liable depending on the facts of the case.

Conversely the nature of the damage matters. Minors can be held more liable for intentional torts than for negligence. For example, it would be questionably negligent of a minor to not know he wasn't supposed to clean the plastic parts of a car with a certain kind of solvent. If the plastic were damaged, it could be a defense for the minor to say, "I didn't know it would cause damage." However, a minor of the same age who intentionally broke a side view mirror by hitting it with a hammer is much more easily held liable.

What you're talking about is negligent supervision of a minor. That occurs when an adult has legal responsibility for the behavior of a minor, whether that's because of a parental duty or—as in your hypothetical—the adult has assumed responsibility by voluntarily taking on a specific role that entails a supervisory responsibility.

Supervision is negligent if the adult does not exercise reasonable care in overseeing what the minors do. This is not a strict liability matter in the U.S. An adult supervisor cannot be everywhere at all times, but is expected to take reasonable steps to govern the behavior of the minor charges. If the adult supervisor is occupied with a kid's request to open a stuck lid on the jar of detergent and, while thus distracted, another kid grabs a hammer and smashes a mirror, it would not likely be the adult's fault.

So as with all questions of law and engineering, the answer is: It depends. The minor can be held liable commensurate with reasonable expectations on his duty of care. The adult supervisor can be held liable commensurate with reasonable expectations on his duty of care.
 
In cases like that, the minor's duty of care would be commensurate to a reasonable person of that same age. Any person held accountable for negligence must be mentally capable of that negligence. As an example, in my state no child younger than 5 years old may be considered negligent for any reason. In other states, 7 years old is the limit. Older than that, minors may be considered liable depending on the facts of the case.

Conversely the nature of the damage matters. Minors can be held more liable for intentional torts than for negligence. For example, it would be questionably negligent of a minor to not know he wasn't supposed to clean the plastic parts of a car with a certain kind of solvent. If the plastic were damaged, it could be a defense for the minor to say, "I didn't know it would cause damage." However, a minor of the same age who intentionally broke a side view mirror by hitting it with a hammer is much more easily held liable.

What you're talking about is negligent supervision of a minor. That occurs when an adult has legal responsibility for the behavior of a minor, whether that's because of a parental duty or—as in your hypothetical—the adult has assumed responsibility by voluntarily taking on a specific role that entails a supervisory responsibility.

Supervision is negligent if the adult does not exercise reasonable care in overseeing what the minors do. This is not a strict liability matter in the U.S. An adult supervisor cannot be everywhere at all times, but is expected to take reasonable steps to govern the behavior of the minor charges. If the adult supervisor is occupied with a kid's request to open a stuck lid on the jar of detergent and, while thus distracted, another kid grabs a hammer and smashes a mirror, it would not likely be the adult's fault.

So as with all questions of law and engineering, the answer is: It depends. The minor can be held liable commensurate with reasonable expectations on his duty of care. The adult supervisor can be held liable commensurate with reasonable expectations on his duty of care.
Pretty much what I had thought. Precedent and laws might vary between the UK and various US states, but the overall principle wouldn't.

Where I was going was hypothetically replacing the AI agent with a child. If I said "my child prodigy 4 year old will paint whatever you ask", if someone asked them to paint something illegal, I'd expect that it would be me that had some liability, assuming for the sake of this hypothetical that I could at any time prohibit my kid from painting certain things.
 
Where I was going was hypothetically replacing the AI agent with a child. If I said "my child prodigy 4 year old will paint whatever you ask", if someone asked them to paint something illegal, I'd expect that it would be me that had some liability, assuming for the sake of this hypothetical that I could at any time prohibit my kid from painting certain things.
Your liability is most broadly determined by your actual involvement in the product. If you simply announce that your child can produce any drawing, and a third party solicits unlawful content from your child without your knowledge or involvement, then the person soliciting would be most liable. This presumes the child has no mens rea. If someone comes to you to solicit a drawing from your child, or you transmit the drawing from your child to the customer, you will have occasion to know the nature of the request and/or the nature of the product. You would be expected to exercise some discretion over both the request and the outcome.

In each case—organic child and AI—different legal principles intervene. If you receive the requests for your child's services and/or distribute the products, you are yourself a trafficker. As such you are legally responsible for what you convey in commerce, regardless of what produces it. Does the operator of an AI service constructively receive requests and distribute output? I would say yes. The fact that AI requests are received electronically and processed automatically does not create a meaningful excuse from regulation. And in fact, AI providers are more capable of providing content restrictions via automation than other services because they are inherently more capable of automating the semantic understanding of a request.
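On the point about automating the semantic understanding of a request, a minimal sketch of a request-side gate, under the same caveat as before: classify_request is a hypothetical stand-in for whatever semantic classifier a provider might run, and only the refusal logic is illustrated.

Code:
# Sketch of request-side screening: refuse before any generation happens.
# classify_request is a hypothetical placeholder for a classifier that
# flags prompts asking for sexualized or violent depictions of people
# who appear to be minors.

def classify_request(prompt):
    """Placeholder: return flags such as
    {'appears_minor': bool, 'sexualized_or_violent': bool}."""
    raise NotImplementedError

def accept_request(prompt):
    flags = classify_request(prompt)
    if flags["appears_minor"] and flags["sexualized_or_violent"]:
        return False  # refuse outright; the request never reaches the generator
    return True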

As a guardian of the 4-year-old child, you have an obligation to protect the child from corrupting influences. This would involve preventing the child from participating in illegal activity, even if the child does not have the capacity to form a guilty mind over it. Thus the case in which a customer contacts your child directly would indicate a breach in your duty of care, even if the request were innocent. The fact that the child might innocently have produced illegal content only speaks louder to the actual injury caused by your bad parenting.

This is analogous to the provider of a service in commerce being aware of and somewhat responsible for the risks inherent in the service—the killbot principle. It's not the same legal obligation as parental duty. In fact it operates in reverse. You are obliged to protect your 4-year-old from the corrupting influence of the world. A service provider is somewhat responsible for protecting the public from the knowable risks of the service.
 
