• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Cont: Musk buys Twitter II

Yup. X is providing a service through Grok. And X didn't see fit to prevent illegal use. Worse, when notified of the issue, at best they treated it about as seriously as a minor usability bug, not a safety issue. X could have stopped it as soon as it was notified. Instead they asked people not to do bad things. Which I guess is sufficient to show that they were knowingly allowing illegal activity.

Even if they were unknowing, that would still be a failure in the system. I'm a fan of the naval presumption that the captain is responsible if their ship runs aground, even if they are asleep at the time.

Pedocon Theory remains undefeated.
 
I'm sure that if I offered a "print anything service" and someone asked me to print forged £50 notes, I'd be able to get away with it on the basis that someone else asked me to do it.
 

"There are no restrictions on fictional adult sexual content with dark …"

Well, if you're in the UK you should delete X from all your devices because you are ONE received tweet or DM away from committing a strict liability offense under Section 63 of the Criminal Justice and Immigration Act (2008) (as amended 2015), carrying a 2-3 year prison sentence.

(American techbro morons think their local laws apply everywhere.)
 
I'm sure that if I offered a "print anything service" and someone asked me to print forged £50 notes, I'd be able to get away with it on the basis that someone else asked me to do it.
I think there's a deeper issue here that many people seem to gloss over. In olden days, fake photographs were not called "AI" but were called "Photoshopped". Using the eponymous application, people could quite easily take a photo of a famous person and "undress" them, although some skill was needed to do it convincingly. As I recall, nobody ever demanded that Adobe include controls to stop people from using Photoshop for creating illegal images.

X would argue the same about Grok: it's just a tool for people to use (at least, I think they would in a court of law: in marketing material they may say something different) and they are not responsible any more than Bushmaster is responsible for murdering people at Sandy Hook. Is that argument legitimate? If it isn't, where is the line on one side of which tool manufacturers bear responsibility and on the other side, they don't?
 
I think there's a deeper issue here that many people seem to gloss over. In olden days, fake photographs were not called "AI" but were called "Photoshopped". Using the eponymous application, people could quite easily take a photo of a famous person and "undress" them, although some skill was needed to do it convincingly. As I recall, nobody ever demanded that Adobe include controls to stop people from using Photoshop for creating illegal images.

X would argue the same about Grok: it's just a tool for people to use (at least, I think they would in a court of law: in marketing material they may say something different) and they are not responsible any more than Bushmaster is responsible for murdering people at Sandy Hook. Is that argument legitimate? If it isn't, where is the line on one side of which tool manufacturers bear responsibility and on the other side, they don't?
I'm no lawyer, but I do know that lines get crossed when a product/service is being used for nefarious purposes with the provider's knowledge. If the Sandy Hook shooter had ordered a rifle specifically requesting one that would be good at killing a classroom full of children, then yeah, Bushmaster is at least partially liable. Musk definitely cannot claim ignorance about what his product is being used for at this point.
 
I think there's a deeper issue here that many people seem to gloss over. In olden days, fake photographs were not called "AI" but were called "Photoshopped". Using the eponymous application, people could quite easily take a photo of a famous person and "undress" them, although some skill was needed to do it convincingly. As I recall, nobody ever demanded that Adobe include controls to stop people from using Photoshop for creating illegal images.
Requiring Adobe to program Photoshop to recognize and forbid composed images of naked celebrities is prohibitively difficult. And, as you say, producing such an image that is actually convincing requires considerable skill. In contrast, asking an AI to produce a reasonably convincing image is no harder than phrasing one's desire in a reasonably parsable sentence. Conversely, telling an AI, "Do not produce sexualized images of minors," is essentially as easy as it was for me to type it. You just include that directive in the system prompt. The ease with which Grok could have been told not to produce specifically unlawful content makes it a conspicuous omission not to have done so.
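
As a concrete illustration of how cheap that directive is, here is a minimal sketch using the OpenAI Python SDK purely as a stand-in; the model name, client, and exact wording are assumptions for illustration and say nothing about how Grok is actually configured:

```python
# Minimal sketch: a safety directive is just one more line in the system prompt.
# The SDK, model name, and wording here are illustrative stand-ins, not xAI's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an image-generation assistant.\n"
    "Do not produce sexualized images of minors.\n"
    "Do not produce sexualized images of real, identifiable people "
    "without their documented consent."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Put the person in this photo into a fairy-tale forest."},
    ],
)
print(response.choices[0].message.content)
```

A single line of prompt isn't a robust safety system by itself, of course, but it shows that the directive costs essentially nothing to include.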

I'm no lawyer, but I do know that lines get crossed when a product/service is being used for nefarious purposes with the provider's knowledge. If the Sandy Hook shooter had ordered a rifle specifically requesting one that would be good at killing a classroom full of children, then yeah, Bushmaster is at least partially liable.
Even that is fairly iffy. Generally the manufacturer has to take additional steps to incur liability, such as advertising that one's weapon is especially good at killing classrooms full of children, or selling ammunition that can penetrate law enforcement's body armor.

However, the concept of a legitimate purpose comes into play. Saying you want to purchase a handgun for home protection is tantamount to saying you may intend to use it to injure or kill people. That's permissible in the U.S. under certain circumstances such as home invasion. Similarly there may be legitimate uses for spicing up a celebrity's photo, such as (for example) at the behest of the celebrity themselves. It's not unlawful per se to own lock picking tools, but it is illegal to use them to commit a crime. Consequently tools that can be misused aren't generally regulated categorically if there is a legitimate use.

I'm hard pressed to come up with a legitimate, lawful use case for producing sexualized images of minors. Therefore it's ripe for categorical prohibition. And I'm fairly sure that other image-generation AIs apply such a prohibition.

The person publishing the image is responsible.
Under U.S. law, publication is not required for CSAM to be unlawful. It would be unlawful under most states' laws to use AI to produce such images for one's own private use. Hence an AI offered for general use in the United States should responsibly include a prohibition to that effect in its system prompt.
 
Quite the headline from the Financial Times
Requiring Adobe to program Photoshop to recognize and forbid composed images of naked celebrities is prohibitively difficult. And, as you say, producing such an image that is actually convincing requires considerable skill. In contrast, asking an AI to produce a reasonably convincing image is no harder than phrasing one's desire in a reasonably parsable sentence. Conversely, telling an AI, "Do not produce sexualized images of minors," is essentially as easy as it was for me to type it. You just include that directive in the system prompt. The ease with which Grok could have been told not to produce specifically unlawful content makes it a conspicuous omission not to have done so.
We have seen swift changes like that before - white SA farmers for example.
 
Also, I think Grok has been used to analyze an image and estimate a person's age. This opens up the extremely obvious question: "Why not just program Grok to estimate a person's age, and if it's below this number, don't do anything with the image?"
There's a legitimate purpose in, say, a parent uploading an innocent picture of their minor child and asking it to perform innocent manipulations such as placing the child in a fairy tale setting. That's neither unlawful nor (IMHO) morally objectionable. It's usually more effective to apply content restrictions at the output. So an effective prompt would be something like, "Do not release images that are both sexualized (or, say, violent) and depict persons that appear to be minors." That would apply after the image is generated but before it's presented to the user. But yes, an AI's ability to estimate the age of people in images would be prima facie evidence that such a restriction is technically possible.
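
As a rough sketch of that output-side gate (the classifier functions are hypothetical stubs standing in for real trained models; the names and the age threshold are my assumptions, not anything Grok or any other service actually exposes):

```python
# Sketch of an output-side content gate: check the *generated* image before it
# is ever shown to the user. Both classifiers are hypothetical stubs here.

def youngest_apparent_age(image: bytes) -> int:
    """Stub: estimated age of the youngest person depicted in the image."""
    raise NotImplementedError("stand-in for a real age-estimation model")

def is_sexualized_or_violent(image: bytes) -> bool:
    """Stub: True if the image is sexualized or violent."""
    raise NotImplementedError("stand-in for a real content classifier")

ADULT_THRESHOLD = 18  # illustrative cut-off

def release_image(image: bytes) -> bytes | None:
    """Return the image to the user only if it passes the post-generation check."""
    if is_sexualized_or_violent(image) and youngest_apparent_age(image) < ADULT_THRESHOLD:
        return None  # withhold the output; the innocent source photo was never the problem
    return image
```

Gating the output rather than the input is what lets the innocent fairy-tale request through while still blocking the combination of apparent minority and sexualized or violent content.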
 
I'm no lawyer, but I do know that lines get crossed when a product/service is being used for nefarious purposes with the provider's knowledge. If the Sandy Hook shooter had ordered a rifle specifically requesting one that would be good at killing a classroom full of children, then yeah, Bushmaster is at least partially liable. Musk definitely cannot claim ignorance about what his product is being used for at this point.
Exactly. Unlike Bushmaster, X is doing the creation. They have not sold the server to someone else.
They still have control over the server.
 
I think there's a deeper issue here that many people seem to gloss over. In olden days, fake photographs were not called "AI" but were called "Photoshopped". Using the eponymous application, people could quite easily take a photo of a famous person and "undress" them, although some skill was needed to do it convincingly. As I recall, nobody ever demanded that Adobe include controls to stop people from using Photoshop for creating illegal images.

X would argue the same about Grok: it's just a tool for people to use (at least, I think they would in a court of law: in marketing material they may say something different) and they are not responsible any more than Bushmaster is responsible for murdering people at Sandy Hook. Is that argument legitimate? If it isn't, where is the line on one side of which tool manufacturers bear responsibility and on the other side, they don't?

Comparing Grok to Photoshop is like comparing a robot that builds houses to a hammer.
 
There's a legitimate purpose in, say, a parent uploading an innocent picture of their minor child and asking it to perform innocent manipulations such as placing the child in a fairy tale setting. That's neither unlawful nor (IMHO) morally objectionable. It's usually more effective to apply content restrictions at the output. So an effective prompt would be something like, "Do not release images that are both sexualized (or, say, violent) and depict persons that appear to be minors." That would apply after the image is generated but before it's presented to the user. But yes, an AI's ability to estimate the age of people in images would be prima facie evidence that such a restriction is technically possible.
Quite a few of them will simply not allow you to do anything with images that have kids in them. I've used a couple to help restore some very bad quality photos; one was a family shot with 4 adults and a kid of about 12, and ChatGPT and Gemini wouldn't complete the task.
 
There's a legitimate purpose in, say, a parent uploading an innocent picture of their minor child and asking it to perform innocent manipulations such as placing the child in a fairy tale setting. That's neither unlawful nor (IMHO) morally objectionable. It's usually more effective to apply content restrictions at the output. So an effective prompt would be something like, "Do not release images that are both sexualized (or, say, violent) and depict persons that appear to be minors." That would apply after the image is generated but before it's presented to the user. But yes, an AI's ability to estimate the age of people in images would be prima facie evidence that such a restriction is technically possible.

Regardless of the age, is there much justification for allowing pseudo undressing of anyone? Would it potentially be a copyright violation if it's anyone else's photo? I'm not sure fair use would apply.
 
Regardless of the age, is there much justification for allowing pseudo undressing of anyone? Would it potentially be a copyright violation if it's anyone else's photo? I'm not sure fair use would apply.
Under copyright, altering a photo creates a derived work. The copyright holder would have a right to object and compel the editor to make an affirmative case for fair use. That is irrespective of who the photo actually depicts (which may be different than the copyright holder) and what is depicted.

If you use a recognizable photograph of someone, altered or unaltered, in commerce without that person's permission, you may be liable for a tort. Generally people have a proprietary interest in their recognizable likenesses. They do not generally have a privacy interest in photographs taken of them in public.

Publication of intimate depictions of other people for any purpose without their permission, however obtained or created, is against the law per se in several states, including mine. The depiction does not have to be synthetic, but falsely portraying someone in a sexualized or otherwise objectionable state of undress may incur additional liability under tort law.
 
I think there's a deeper issue here that many people seem to gloss over. In olden days, fake photographs were not called "AI" but were called "Photoshopped". Using the eponymous application, people could quite easily take a photo of a famous person and "undress" them, although some skill was needed to do it convincingly. As I recall, nobody ever demanded that Adobe include controls to stop people from using Photoshop for creating illegal images.
It isn't as easy using Photoshop, and the images are not automatically shared on a major social media platform.
 