
Merged Artificial Intelligence

V. geeky. I had to run the paper title through an LLM just to work out what the subject was.
For fun I asked Copilot to summarise it. It did, and in effect hid the sarcasm and irony in the original, producing a sanitised version without the precautionary warnings. In other words, like many humans reading it would have done, it failed to understand the meaning of the article. And when I then let it lead me down the rabbit hole, it got more and more detached from the meaning of the article. A very good example of how relying on AI summaries can be problematic, and given how hard the commercial companies are pushing the use of AI to summarise complex subjects, it should be setting off alarm bells.

tl;dr? Copilot's summary of the above:

**Summary:**
The text describes an experiment in which Copilot was asked to summarize an article. Instead of capturing the original text's sarcasm, irony, and important cautionary details, the AI produced a sanitized version that stripped away these nuances. As a result, the summary lost the original meaning, demonstrating a significant disconnect that mirrors common human oversights. The author uses this example to underscore the potential dangers of relying on AI to summarize complex subjects, especially as commercial entities increasingly push for its use, raising serious concerns about the accuracy and integrity of such AI-generated interpretations.

**Additional Thoughts:**
This example brings up interesting questions about context and nuance in language. How can AI be improved to better grasp subtleties like tone? And what measures should be in place when summarizing complex texts to ensure important elements aren’t lost? These considerations are particularly significant as AI tools become more integrated into media and education.
 
AlphaEvolve has been discovering or improving solutions to real-world problems using Gemini 2.5. Google has already implemented some of the solutions it has produced, for example increasing efficiency in its TPU designs and in its real-world datacentres.
 
This is a use of AI I think I approve of.

I Built a Bot Army that Scams Scammers


The scammers spend forever trying to convince the AI persona to give them money.

The downside I see is that soon AI bot scammers will be interacting with AI bot scammees, and the Internet will melt down as all the bandwidth gets sucked up by fake conversations.


Not that that is a bad thing.
 
They are getting spookily better. I just asked Manus AI to create me a new theme for my Shopify online shop; all I gave it was the current shop URL, and it created a complete theme, with instructions etc.
 
I'm not sure if anyone here will care, but ChatGPT is not allowed to delete user prompts etc.
 
They're not allowed to delete the output; they can delete the input, so your prompts can still be deleted... I don't know if the court's decision is warranted or not, as I've not followed the case.
 
I thought it was interesting and in fact a little bit funny. If it's not true, then it's still a little bit funny.
Yes, people even make entire careers out of telling fictional narratives for the amusement of others.

My concern is that it's very easy for us to fall into the same trap that LLMs do: Being saturated with untrue/unsupported/unverifiable claims, which lead us to hallucinate a false reality. So it bothers me when, in an ostensibly rational discussion about AI, people post engagingly humorous anecdotes that seem truthy but are not actually known to be true.

To my mind, that's one of the ways misinformation spreads, and people end up living in a "post truth world".
 
While I do not disagree, this is heading towards a world where you can't say anything off the cuff. Everything has to be backed up with robust sources and citations to peer-reviewed journals and I don't think anybody's actually going to do that.

Yes, I realise this sounds a bit like the culture war "you can't say [x] any more" bull that we often get from the right.
 
From time to time I try out AI image generators. It's often frustrating as most quite rightly wish to recover their costs of running the site and the AI by asking at least for users to sign up, and then for money. (Side note: an awful lot of sites don't tell you about the sign-up requirements until you try to generate your first image, which is just rude.)

They can be really good at things like "Romantic palace by the seashore" and "Ladies in Victorian dress playing croquet in a lake." :) But all too often they cannot follow longer prompts.

For example, I asked the image generator at deepai.org to generate a picture of a "standard 102 key computer keyboard, but extended to include an extra row of keys above the space bar with the labels Super, Hyper, Sub, Over, Under, Var, Con, Charm, Strange, Chg, Compose." (This is a joke, taking the Space-cadet keyboard and turning it up to eleven. The Con key is for "Concise" and "Chg" is for "Change," and it's up to the OS and application to decide what to do if, for example, "Concise+A" is pressed. Or "Ctrl-Alt-Shift-Meta-Super-Hyper-Sub-Under-Var-Concise-Charm-Change-keypad-3", using all 10 fingers and your nose. :p Yes, I have a strange sense of humour.)

All too often it doesn't even give me a 102 key keyboard with the 11 new keys, but a trimmed down one with 72 to 80 keys. Typically the standard row of 12 function keys (F1-F12) is missing, with only the Esc key showing. The extra row I asked for is totally missing, and only rarely do any keys appear with my requested labels.

As another test, I asked for "Playing card. Suit IS NOT ONE OF SPADES, CLUBS, DIAMONDS, OR HEARTS. The suit is KEYS. Similar to JACK, but use a lady in the French style. Ergo, the card is the Lady of Key, using L as the abbreviation and a key as the pip."

About the only thing it gets right is using a lady on the main body of the card. Very often she is holding a key, which is not what I intended. All too often the returned image has a "J" instead of an "L" for the card's value, and always gives one of the North American classic suits of clubs, diamonds, hearts, or spades. Sometimes more than one on the same card.

If anyone has access to a good image generator, I'd love to know what happens when you give them these prompts.
 
Latest and the claimed best OpenAI generator

[Attached image: 1000015203.jpg, the generated keyboard]
Prompt was "standard 102 key computer keyboard, but extended to include an extra row of keys above the space bar with the labels Super, Hyper, Sub, Over, Under, Var, Con, Charm, Strange, Chg, Compose."
 
How did “Strange” become “Sttarge”? And we have “Compere”, “Somis”, and “Bockspace”.

Clearly, letters aren’t its strong suit. Perhaps they should teach it to read and write?
 
Compared to many previous generations they have improved text within images a lot, especially specified text. It does show how they have no understanding of what they are producing. If a human were mocking up such a keyboard the text would be 99% accurate (not a hundred percent, as people make errors and mistakes just as generative AIs do).

I've been looking to see if any of the generative AIs use or separate out a "text pass" when generating an image. In other words a "cut and paste" approach to ensure text is 100% accurate when compared to the prompt.

For example, if I ask an AI to describe the text on the bottom two rows of keys in the image above, it can do so:

The text on the bottom two rows of the keyboard is as follows:
**Bottom row:**
- Comps
- Spacebar
- Chg
- Compare
**Second to bottom row:**
- Super
- Hyper
- Sub
- Over
- Under
- Var
- Con
- Charm
- Srtarge
- Compare
So when requesting specific text in an image, it is entirely possible for an AI to check that the image contains the correct text.
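
To make the idea concrete, here is a minimal sketch of what such a "text pass" could look like, assuming a Python setup. Pillow and pytesseract are real libraries; `generate_image` is a hypothetical placeholder for whichever image-generation API you happen to use, and the whole loop is illustrative rather than how any current generator actually works.

```python
# Minimal sketch of a "text pass": generate an image, OCR the result, and
# compare the recovered text against the labels the prompt asked for,
# retrying if any are missing or garbled.
# Pillow and pytesseract are real libraries; generate_image is a
# hypothetical stand-in, not a real API.
from PIL import Image
import pytesseract


def generate_image(prompt: str, out_path: str) -> str:
    """Hypothetical placeholder: call your image generator and save to out_path."""
    raise NotImplementedError("plug in a real image-generation call here")


def missing_labels(image_path: str, required: list[str]) -> list[str]:
    """Return the requested labels that OCR could not find in the image."""
    ocr_text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return [label for label in required if label.lower() not in ocr_text]


def generate_with_text_check(prompt: str, required: list[str], max_tries: int = 3) -> str:
    """Regenerate until every requested label survives the round trip, or give up."""
    path = ""
    for attempt in range(1, max_tries + 1):
        path = generate_image(prompt, f"attempt_{attempt}.png")
        bad = missing_labels(path, required)
        if not bad:
            return path
        print(f"Attempt {attempt}: labels missing or garbled: {bad}")
    return path  # best effort after max_tries


# The labels from the keyboard prompt discussed above.
KEY_LABELS = ["Super", "Hyper", "Sub", "Over", "Under", "Var",
              "Con", "Charm", "Strange", "Chg", "Compose"]
```

Of course, OCR has its own failure modes, and a check like this only catches missing or misspelled labels, not key counts or layout, but it does show that "read the text back and compare" is technically straightforward.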
 
These things aren't thinking. They're not reading, or doing math, or anything like reasoning about the prompt. They're mechanically finding a statistically likely fulfillment of the prompt.
 
Also, they don't know what words or letters look like. They're not producing writing. They generate an image that is like writing.
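
A toy example of what "statistically likely fulfillment" means in practice. The probability table is completely made up, and this is not how image models render glyphs, but it captures the point: a sampler has no concept of correctness, only of likelihood, so an unlikely-but-possible draw is always on the table.

```python
# Toy illustration only: the probabilities are invented, not taken from any
# real model. A sampler like this has no concept of "correct spelling"; it
# just draws a continuation in proportion to learned likelihoods.
import random

# Hypothetical learned distribution over the character that follows "Stra".
next_char_probs = {"n": 0.55, "t": 0.20, "r": 0.15, "i": 0.10}


def sample_next(probs: dict[str, float]) -> str:
    """Draw one continuation, weighted by its probability."""
    chars = list(probs)
    weights = [probs[c] for c in chars]
    return random.choices(chars, weights=weights, k=1)[0]


prefix = "Stra"
print(prefix + sample_next(next_char_probs))  # usually "Stran...", occasionally not
```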
 
