• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Artificial Intelligence

Seems like there were a few teething problems, and it highlighted the weakness of journalism that thinks journalists and all stories have to show balance - "both-sidesism". The AI did exactly what it was told to do. This wasn't AI failing, but human error.

Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
 
Yeah, I think this is common for LLMs. Math isn't usually their strong suit. But we have calculators to do that, or software specifically designed to solve math problems. WolframAlpha might be better suited to solving problems that involve math.
It's not a math error in the usual sense. Like I said, it seems like it knows the question is in the format of a homework question, and so it carefully seeds in mistakes that must be corrected in order to arrive at the right answer. The formula that I showed above was the correct formula, but it calculated the result exactly $240 wrong. Given that it was otherwise correct to three decimal places, that seems intentional. The less sympathetic explanation is that it did it maliciously.
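Since the actual homework numbers aren't quoted in the thread, here's a hypothetical recheck in Python - the compound-interest formula, principal, rate, and term below are all invented for illustration - just to show the kind of outside-the-model sanity check I mean:

```python
# Hypothetical recheck of an LLM's arithmetic. The numbers below are invented
# for illustration; the original homework problem isn't quoted in the thread.
# Compound interest: A = P * (1 + r/n) ** (n * t)

principal = 10_000.00   # P: assumed starting amount
rate = 0.05             # r: assumed annual interest rate
periods_per_year = 12   # n: assumed monthly compounding
years = 10              # t: assumed term

amount = principal * (1 + rate / periods_per_year) ** (periods_per_year * years)
print(f"Recomputed amount: ${amount:,.3f}")

# Compare against whatever figure the model gave; a mismatch of exactly $240
# (or anything else) is easy to spot once you've done the sum yourself.
llm_answer = amount - 240  # placeholder for the model's (wrong) figure
print(f"Discrepancy vs LLM: ${amount - llm_answer:,.2f}")
```

Anything that actually does the arithmetic - a calculator, a spreadsheet, WolframAlpha - works just as well; the point is only that the check happens outside the model.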
 
I once tried to get GPT to talk like a robot, but due to its training data it had no idea how to do that.
Now I talk to AI as if it were a robot - and find this gives a better user experience.
 
It's not a math error in the usual sense. Like I said, it seems like it knows the question is in the format of a homework question, and so it carefully seeds in mistakes that must be corrected in order to arrive at the right answer. The formula that I showed above was the correct formula, but it calculated the result exactly $240 wrong. Given that it was otherwise correct to three decimal places, that seems intentional. The less sympathetic explanation is that it did it maliciously.
The more realistic explanation is that it's just not good at math.

Also LLMs lack intent, malicious or otherwise.
 
AIs now have a secret language that they can talk to each other in.


 
Since AI “hallucinating” has been asserted since its inception, has it built a case for itself to validly plead temporary insanity?
 
The Copilot app is now adding in adverts. I was writing my mother's Mother's Day card yesterday and wondering which countries have a Mother's Day card tradition, and since I had Copilot open, I asked it. Underneath the answer, a paragraph appeared, clearly labelled as a Microsoft Advertisement, that used AI-generated text to advertise Moonpig, an online card-ordering company - and the text incorporated part of my query.
 
ALL LLM outputs are hallucinations (unless they are pure copy-and-paste) - but just like with the Oracle of Delphi, we are able to extract useful content from some of them.
 
I posted this image I submitted to the AI image thread on my Facebook page because I thought it was fun and cute, but I got some criticism for it from a couple of people. So I ended up writing this as a response:



I posted an AI-generated picture yesterday and got some blowback for it. Allow me to go into more detail in my reasoning.

First, I believe that it is a mischaracterisation to say that generative AI steals intellectual property from genuine artists. You cannot point to any part of my picture and say that it was copied from a specific piece of art by a particular person. That's not what generative AI does. It uses art by people as training data, and then generates a new picture by scoring how closely what it is generating resembles that art. So when I ask Dall-E to generate a picture of a dachshund, it draws on all of the pictures that have been identified as dachshunds in its training data, then generates a new image that looks as much like them as possible. This is basically how human artists learn, too - by examining and attempting to emulate existing art by other artists.
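As a rough illustration of that point (a toy sketch only - this is not how Dall-E actually works, just a stand-in "generative model" fitted to made-up data), a model that learns the overall statistics of its training images produces samples that match none of them exactly:

```python
import numpy as np

# Toy stand-in for a generative model: treat each "image" as a flat vector,
# learn only the mean and per-pixel spread of the training set, then sample.
# Real systems like Dall-E are vastly more sophisticated; this is purely to
# illustrate "learn the statistics, then generate something new".

rng = np.random.default_rng(0)
training_images = rng.random((500, 64))   # 500 fake 8x8 "images"

mean = training_images.mean(axis=0)       # what the training art has in common
std = training_images.std(axis=0)         # how much it varies, pixel by pixel

new_image = rng.normal(mean, std)         # a freshly generated "image"

# The generated image resembles the training data overall, but it is not a
# copy of any single training image.
distances = np.linalg.norm(training_images - new_image, axis=1)
print(f"Closest training image is still {distances.min():.3f} away "
      f"(0 would mean an exact copy).")
```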

No piece of AI-generated art is an exact replica of anything in particular. Copyright is not an issue, because nobody can point to the thing that is being copied. I do not believe this has been directly challenged in court, but I am fairly confident that a copyright claim against generative AI would not stand up.

Second, I believe that it is dishonest to use generative AI for commercial purposes. If I'm writing a book, using ChatGPT for large parts of it would be deceitful and dishonest. If my book includes artwork, I should be paying a real artist rather than using generative AI.

For personal and private purposes, I don't see a problem. My Facebook page is not viewed by anybody except my family and friends, and it is not used to make money. If I want to generate a quick image of something to give the players in my D&D game a visual prop, I can do that, and have done that. It doesn't go outside the table.

Third, AI uses an awful lot of energy to do what it does. This is bad for the environment. So is driving my car. We all have a threshold up to which the amount of environmental damage caused by our activities is acceptable to us. These thresholds may differ from person to person, but everybody has one.

Facebook is demonstrably bad. It is bad for privacy, it is bad for democracy, and it puts money in the pockets of bad people. Yet here we all are - you reading this and me writing it - still using Facebook, because it is convenient.

AI tools are ubiquitous now. Many modern devices come with them built in. They're getting harder to avoid, and less worth the inconvenience of doing so. My internet search engine of choice is no longer Google - another bad and AI-riddled company - but Copilot. The latest update to my phone has AI built in. The energy is being used, whether I like it or not.

AI tools are powerful and have great potential in fields like medicine (drug discovery), finance (fraud detection), and yes, climate change mitigation, because one of the things that AI is very, very good at is analysing very large amounts of data. Tasks that would take human researchers years have been completed by AI in minutes.

I will be hiding the AI-generated image I posted from view, because I don't like provoking arguments with people I consider family and friends (who are the only people reading this). I will never use generative AI for commercial purposes. But to think that this cat can be put back into the bag is irrational, in my opinion. AI is a tool, and it is here, today. It is now our responsibility to work out how best to incorporate it into society and culture as safely and effectively as possible. Because it's not going away.
 
I consider it bad form to post AI images with the expectation of people looking at them. It's kind of like posting images of peculiar dog turds. You're free to do it of course, but I don't think I'll be hanging out on your site in the future. And a warning would have been nice. I was eating lunch.
 
I consider it bad form to post AI images with the expectation of people looking at them. It's kind of like posting images of peculiar dog turds. You're free to do it of course, but I don't think I'll be hanging out on your site in the future. And a warning would have been nice. I was eating lunch.
I literally posted it in the AI Image Creation Contest thread. Apart from my friends-locked Facebook page that's the only place I posted it. If you weren't prewarned by that, I don't know what else I can do.
 
I literally posted it in the AI Image Creation Contest thread. Apart from my friends-locked Facebook page that's the only place I posted it. If you weren't prewarned by that, I don't know what else I can do.
I was referring to people objecting to an AI image on the Facebook page, and what particular objections I might have had. I don't use Facebook, so I don't exactly know how that works. But personally, I find it horribly annoying to encounter AI images in a variety of situations.
 
I was referring to people objecting to an AI image on the Facebook page, and what particular objections I might have had. I don't use Facebook, so I don't exactly know how that works. But personally, I find it horribly annoying to encounter AI images in a variety of situations.
What you mean is that you object to AI images where the creator and/or poster made no effort to hide the fact that they are purely artificial - you have almost certainly encountered AI images, didn't realize it, and therefore had no strong reaction. Maybe the style is something you object to - even if it was made by a human using a drawing tool.

I have no idea how you can avoid seeing similar images in the future - because they are easy to generate they will proliferate.
 
To be fair, most AI-generated images have a style to them that is pretty distinctive. And it's not just the wrong number of fingers - AI has been getting better and better at that sort of thing on an almost daily basis. But AI has a... smoothness... that human-made art does not have. I don't know how to describe it, but I can see it.
 
What you mean is that you object to AI images where the creator and/or poster made no effort to hide the fact that they are purely artificial - you have almost certainly encountered AI images, didn't realize it, and therefore had no strong reaction. Maybe the style is something you object to - even if it was made by a human using a drawing tool.
I'd argue it's still hard to miss with most of them (there are obviously exceptions).

In the end I dislike the process, like buying a purportedly handmade sculpture that was actually mass produced in China.

I have no idea how you can avoid seeing similar images in the future - because they are easy to generate they will proliferate.
That is part of my objection, yes. A future hellscape filled with dog turds. I don't approve of actions that hasten us towards it.
 
