
Merged Artificial Intelligence

Isn't that what most artists would do when creating an image with text in it as an artistic feature?
They do if the text is in a script that is unfamiliar to them. If you're familiar with a script, the thing that looks the most like a T is a T.
 
An eternal problem here is people believing software/systems/whatever salesmen. The sheer unmitigated crap these guys will spin to make a sale continues to be... well, I'd say unbelievable, but obviously it isn't to everyone. I took great pleasure when my then boss had security escort a salesman offsite after, once again, the promised live demo turned out to be another revamped PowerPoint presentation.
 
I thought this podcast was informative:


A quick overview: first they talk about whether AI is already starting to displace workers in some areas, particularly new college graduates just entering the job market. The answer seems to be that it's not obvious yet, but the unemployment rate for new college graduates is rising a bit. It's a couple of points higher than it was 2 or 3 years ago - hardly the worst we've ever seen - and it's also not clear that AI is responsible for the rise.
They also interview someone from Anthropic, the makers of Claude. Near the end they mention that Elizabeth Holmes' partner (husband or boyfriend? he is the father of her children) is trying to raise capital for a new startup that looks an awful lot like Theranos 2.0.

Here's an article about that:


The partner of disgraced Theranos CEO Elizabeth Holmes, Billy Evans, has reportedly raised millions of dollars to launch a new blood-testing company that aims to make diagnoses from blood, saliva, and urine, according to The New York Times.

The company, named Haemanthus, has been pitched as "the future of diagnostics," according to two potential investors who spoke to the Times on condition of anonymity.

Sound familiar?
 
Something I've found interesting is how good they are at programming. I was a programmer a long, long time ago and very much an average coder. I've been able to do quite a few programming tasks, with the AIs doing the bulk of the work. In theory I could have done these tasks myself, but it would have taken a lot of effort from me - probably the most effort going into getting back up to speed with a modern IDE and a modern language!

The preamble is because, as good as they are, they still can't "do it all". They will make mistakes, such as not including a library or not setting up functions properly, and I don't think I would have been able to do the "debugging" without my prior coding experience - which in this scenario means changing prompts, e.g. telling it what has gone wrong and telling it what it should be doing. It will be interesting to see if they can get that last 10 percent.
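For illustration, here's the sort of one-line slip I mean - a hypothetical sketch, not output from any particular AI: a first draft that calls sqrt() but forgets the import, which a follow-up prompt like "sqrt is not defined" gets fixed.

```python
# A hypothetical AI first draft called sqrt() without importing it,
# which raises NameError at runtime; the "debugging prompt" fix is
# the one-line import below.
from math import sqrt

def hypotenuse(a, b):
    """Length of the hypotenuse of a right triangle."""
    return sqrt(a * a + b * b)
```

With the import in place, hypotenuse(3, 4) returns 5.0 as expected.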

If folks are interested in this and have Windows, you can use Copilot to generate code for free. I've been using Manus on its free plan and found it better than Copilot.

ETA: One of the tasks I wanted to do was remove duplicate images in my working folder. It took about 20 minutes in all to have a rough and ready "app" up and running to do this.
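The core of such a duplicate-finder is only a few lines. A minimal sketch of the approach (hash each file's contents and group matching hashes - the function name is my own, not what the AI actually produced):

```python
import hashlib
from pathlib import Path

def find_duplicates(folder):
    """Group files in `folder` by content hash; return the groups
    that contain more than one file (i.e. the duplicates)."""
    seen = {}
    for path in Path(folder).iterdir():
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            seen.setdefault(digest, []).append(path)
    return [paths for paths in seen.values() if len(paths) > 1]
```

A real "app" would add a GUI and a confirmation step before deleting anything, but this is the essential logic.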
 
A report into AI-created websites from Lovable showed many had massive security holes, including exposing API keys that would allow hackers to run up charges to Lovable customers.
 
A report into AI-created websites from Lovable showed many had massive security holes, including exposing API keys that would allow hackers to run up charges to Lovable customers.
That shouldn't come as a major surprise. It happens enough with (especially) small companies that code stuff from scratch. I don't think it should be that difficult to fix - surely they can start the AI coding with a "security" library that deals with information security, so it can only build an app/site with adequate security?
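As a sketch of what that baseline might look like: a fail-fast check that secrets come from the environment rather than being pasted into the generated source. The variable names here are assumptions for illustration, not from any real product.

```python
import os

# Names of secrets the generated app must never hardcode
# (hypothetical list; a real baseline would be project-specific).
REQUIRED_SECRETS = ("SECRET_KEY", "DATABASE_URL")

def check_security_baseline(env=os.environ):
    """Refuse to start unless required secrets are present in the
    environment, so they never end up committed in generated code."""
    missing = [name for name in REQUIRED_SECRETS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing required secrets: {', '.join(missing)}")
```

If the AI's scaffold always called something like this at startup, the "API key pasted into the source" class of hole would at least fail loudly instead of shipping silently.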

___________________

Last night I started Manus building a forum app, with very, very little detail in my prompt: "create a forum app that allows different sections and threads and posts that users register to use". When I went back to it this morning it was asking me if I wanted to deploy it as a website-based forum - I answered yes - and it is now doing a whole load of stuff. When it's complete I'll see if it works at all and post a link to it if possible.
 
Not going to be able to provide a link - Manus has had issues and has now stopped:

I've encountered a deployment issue with the forum application. The latest deployment attempt resulted in a "fetch failed" error, which suggests there might be network connectivity or deployment platform issues.

After multiple attempts to deploy the forum application with different approaches, I'm encountering persistent challenges with the backend database and deployment infrastructure.

Let me summarize what we've tried and what options we have moving forward:
  1. What we've tried:
    • Implemented a Flask backend with RESTful API endpoints
    • Created a React frontend with all necessary components
    • Switched from MySQL to SQLite for simpler deployment
    • Added enhanced error logging and diagnostics
    • Made multiple deployment attempts with different configurations
  2. Current challenges:
    • Backend API returns 500 errors for the sections endpoint
    • Database initialization or access issues in the production environment
    • Deployment platform connectivity issues
  3. ...snip...

The reason it's stopped is that I've run out of free credits - I'll see if it will pick it up again when my free credits replenish tomorrow.
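For context, the "sections endpoint" and SQLite backing that Manus describes might boil down to something like this minimal sketch. The schema and names are my guesses from its summary, and skipping the init step in production is one plausible source of those 500 errors:

```python
import sqlite3

def init_db(path="forum.db"):
    """Create the sections table if it doesn't exist. Forgetting to
    run this in the production environment would make every query
    below fail - i.e. a 500 on the sections endpoint."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sections "
        "(id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
    )
    conn.commit()
    return conn

def list_sections(conn):
    """Roughly what a /api/sections handler would return as JSON."""
    rows = conn.execute("SELECT id, name FROM sections ORDER BY id").fetchall()
    return [{"id": r[0], "name": r[1]} for r in rows]
```

The appeal of the MySQL-to-SQLite switch Manus mentions is exactly this: no separate database server to provision, just a file (or ":memory:" for tests).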
 
That shouldn't come as a major surprise. It happens enough with (especially) small companies that code stuff from scratch. I don't think it should be that difficult to fix - surely they can start the AI coding with a "security" library that deals with information security, so it can only build an app/site with adequate security?

Yes, but the fact that they didn't start with, say, a basic web project template that included security from the ground up reeks of overconfidence to me.
 
Not going to be able to provide a link - Manus has had issues and has now stopped:



The reason it's stopped is that I've run out of free credits - I'll see if it will pick it up again when my free credits replenish tomorrow.
I've actually got two working forums from my one-sentence prompt, from two different AI agents! They are basic of course, and may be bugged to buggery, but I am astonished that I've had to do nothing at all apart from the prompt. One gave me a way to "deploy" it live as a website and it worked as expected. Its description of the forum app is:

A comprehensive online forum application, built with Next.js 14, PostgreSQL, and Tailwind CSS, is successfully deployed. It features robust user authentication, hierarchical forum structure, a rich text editor, and an extensive admin panel. Pre-populated content and demo accounts are included, making it production-ready with full moderation and responsive design.

Screenshots from the better one:

Screenshot 2025-06-02 152106.jpg


Screenshot 2025-06-02 152142.jpg

Example thread with post:

Screenshot 2025-06-02 152232.jpg
 
Yes, but the fact that they didn't start with, say, a basic web project template that included security from the ground up reeks of overconfidence to me.
An AI can't have overconfidence because it doesn't have confidence. It's just an autocomplete bot.

It didn't start with a security-forward template because it does not reason abstractly about the prompt - certainly it doesn't reason according to a pre-loaded value system that includes things like "don't forget security" and "try to do the right thing".

It's not trying to reason its way towards a good implementation. It's mindlessly generating a statistically likely implementation, based on a corpus of who knows how many things that may or may not be implementations at all.

tl;dr - The AI gives you insecure code because that's what people write. Especially people writing casually, for an open-source world. Putting secrets in production code is a common mistake that ignorant and/or lazy programmers make.

An actual code-writing bot could include an LLM, but an LLM alone is not a code-writing bot. At best, it's a code-drafting bot.
 
The AI didn't write itself. People designed it to do as it did and didn't include security.
We don't know if that is the case; it might be that the prompt simply didn't ask for security, and if you made that request in your prompt it might well have included it. As ever, computers do what we ask them to do, which may not be what we wanted them to do! :)


One of the failed attempts lets me share the process it went through. Whilst it had problems at the end, I think it is quite fascinating to read how the agents reason:


ETA: That's very neat - you can see the reasoning on the left-hand side and what steps the agent was doing on the right-hand side, so you can see the code creation and so on.
 
The AI didn't write itself. People designed it to do as it did and didn't include security.
Yes, because it's not that kind of application. It's an autocomplete bot, not a best-practices web developer. Nobody involved in developing this "AI" set out to write a best-practices web developer bot. Nor should they have.

If you want an AI-powered best-practices web developer, you need to add those components to a complete integrated system: either in the form of an experienced web developer who can add best practices to an autocompleted first draft, or in the form of additional code that performs a similar function.
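As a toy example of that "additional code": a post-generation audit pass could scan an AI draft for the hardcoded-credential mistake discussed earlier. The pattern and function name here are hypothetical, just a sketch of the idea:

```python
import re

# Crude pattern for "secret assigned a string literal" - a real
# auditor (or a tool like a secret scanner) would be far more thorough.
SECRET_PATTERN = re.compile(
    r"""(api[_-]?key|secret|token)\s*=\s*['"][^'"]+['"]""",
    re.IGNORECASE,
)

def audit_draft(source):
    """Return the 1-based line numbers in AI-drafted source code
    that look like hardcoded credentials."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if SECRET_PATTERN.search(line)]
```

It's exactly the kind of mechanical review step an experienced developer performs on a first draft, bolted on as code.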
 
One of the failed attempts lets me share the process it went through. Whilst it had problems at the end, I think it is quite fascinating to read how the agents reason:


ETA: That's very neat - you can see the reasoning on the left-hand side and what steps the agent was doing on the right-hand side, so you can see the code creation and so on.
I get an error when I follow the link.
 
We don't know if that is the case; it might be that the prompt simply didn't ask for security, and if you made that request in your prompt it might well have included it. As ever, computers do what we ask them to do, which may not be what we wanted them to do! :)
When was "The Night Before Crisis" written again?
Then the client turned round with a snarl and a taunt.
It's just what I asked for but not what I want.
 
