
Merged Artificial Intelligence

It is difficult to know who you are chatting with. I was using the support for my antivirus product, and after a while it said, hang on, I'm transferring you to a human agent. The human agent took a long time with his answers, so I could see him typing, stopping, then typing again, just like a human would. Then after a while he wrote that I would be transferred to second-level support. This was a woman who started by saying, I'm sorry our support bot could not help you, please give me some time to review the issue.

So, was their human support a bot after all, despite the slow typing? Or did the second-level supporter not know that they have human first-level supporters?
 
Just saw a promotional clip from DARPA, where their head of fighter research describes at a high level how they're using a physical prototype, a digital twin, and an AI model to create a real-time feedback loop of prediction, instruction, result, prediction.
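Out of curiosity, here's what such a loop might look like in the abstract. Everything in this sketch (the function names, the single "gain" number standing in for the aircraft model, the correction rule) is my own assumption for illustration and not anything from the DARPA clip:

```python
# Purely illustrative sketch of a prediction -> instruction -> result loop between a
# digital twin and a physical prototype. All names and the one-parameter "model" are
# assumptions for illustration, not anything DARPA has described.

def physical_prototype(state: float, command: float) -> float:
    """Stand-in for the real hardware; its true gain (0.8) is unknown to the twin."""
    return state + 0.8 * command

state, target, twin_gain = 0.0, 10.0, 1.2       # the twin starts with a wrong gain estimate
for step in range(5):
    command = (target - state) / twin_gain       # instruction derived from the twin's model
    if abs(command) < 1e-9:                      # already at the target
        break
    predicted = state + twin_gain * command      # the twin's prediction of the result
    measured = physical_prototype(state, command)  # result observed on the prototype
    twin_gain = (measured - state) / command     # feed the result back to refine the twin
    state = measured
    print(f"step {step}: predicted {predicted:.2f}, measured {measured:.2f}, gain now {twin_gain:.2f}")
```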
 
Peter Navarro: "We're looking very very carefully at this whole problem of AI data centers driving up the cost of electricity for Americans. You can expect strong action from President Trump on this. It's amazing in a bad way how much electricity is projected for the AI folks to use, and a significant amount of electricity these AI centers are using is serving ChatGPT users in places like India and China."

 
Google's AI nonsense:

Search for "PTC inrush current limiter"

AI Overview:

A PTC (Positive Temperature Coefficient) inrush current limiter uses a special thermistor that has high resistance when cold (limiting the initial current spike when a device powers on) and low resistance when hot (allowing efficient normal operation), acting like a self-resetting fuse to protect components like capacitors and rectifiers from damaging surges, often bypassed after the initial charge.

This is a weird mash-up of how NTC and PTC thermistors operate when used in current-limiter applications.
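For anyone curious about the difference the AI mangled, here's a rough sketch (with made-up part values, not from any real datasheet) of how the two resistance-versus-temperature curves actually behave: an NTC starts high-resistance when cold and drops as it self-heats, while a switching PTC stays low until it trips and then rises sharply.

```python
# Rough sketch with made-up part values, contrasting the two behaviours.
import math

def ntc_resistance(temp_c, r25=10.0, beta=3950.0):
    """Beta-model NTC: resistance falls as temperature rises."""
    t, t25 = temp_c + 273.15, 25.0 + 273.15
    return r25 * math.exp(beta * (1.0 / t - 1.0 / t25))

def ptc_resistance(temp_c, r_base=0.5, switch_c=90.0, steepness=0.25):
    """Idealised switching PTC: low and flat below the switch temperature, then rising steeply."""
    if temp_c <= switch_c:
        return r_base
    return r_base * math.exp(steepness * (temp_c - switch_c))

for t in (25, 60, 90, 120):
    print(f"{t:4d} °C   NTC {ntc_resistance(t):7.2f} ohm   PTC {ptc_resistance(t):9.2f} ohm")
```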

LLMs are merely hollow shells that impersonate intelligence.
 

LLMs are merely hollow shells that impersonate intelligence.
LLMs are merely hollow shells that impersonate intelligence and mimic human behaviour, which is why they lie, why they get confused, why they sound confident when making ◊◊◊◊◊◊◊◊ up, and so on.
 
The ability to hallucinate is specific to LLMs; it's not a human quality being copied .. it's simply a bug. But IMHO it will be fixed eventually ..
...in 5-10 years, much like fusion energy since the start of fusion energy research.

I posted that example because it shows that LLMs don't understand or reason, but instead produce word salads directed by prompts. This is probably why those in upper management and politicians think they're great.

I dread to think of the poor decisions that have been and will be made based on AI summaries. E.g., Wes Streeting is keen for GPs to use AI to summarise consultations with patients. What could possibly go wrong!?
 
...in 5-10 years, much like fusion energy since the start of fusion energy research.

I posted that example because it shows that LLMs don't understand or reason, but instead produce word salads directed by prompts. This is probably why those in upper management and politicians think they're great.

I dread to think of the poor decisions that have been and will be made based on AI summaries. E.g., Wes Streeting is keen for GPs to use AI to summarise consultations with patients. What could possibly go wrong!?
Generally speaking, no. The output is not "word salad". A useful application is translation from one language to another. It tends to be surprisingly good at this. Provided, of course, that it makes sense in the original language. (I'm not saying that translation is the only useful application, but it is one.)
 
The ability to hallucinate is specific to LLMs; it's not a human quality being copied .. it's simply a bug. But IMHO it will be fixed eventually ..
Sure about that?
Looking at MAGA cult officials and spokespersons, they pretty much operate like that. Hallucinating absolute ◊◊◊◊◊◊◊◊ and presenting it with confidence and some eloquence.
 
Sure about that?
Looking at MAGA cult officials and spokespersons, they pretty much operate like that. Hallucinating absolute ◊◊◊◊◊◊◊◊ and presenting it with confidence and some eloquence.
Yes, but they heard the nonsense somewhere else and they believe it. LLMs make it up on the spot, and the main reason they do is that they don't really track how confident they are about anything. So in the end it might be similar (even if stylistically better in the case of LLMs) .. but the cause is different.
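To illustrate the "confidence" point: the only built-in signal an LLM has is a next-token probability, which measures how plausible a word looks in context, not whether the underlying claim is true. Here's a minimal sketch using the open-source transformers library and GPT-2; the model choice and the prompt are just examples, not anything definitive:

```python
# Minimal sketch: inspect the next-token probabilities GPT-2 assigns after a prompt.
# These are plausibility scores over words, not confidence in facts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of Australia is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]          # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p.item():.2%}")
```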
 
I dread to think of the poor decisions that have been and will be made based on AI summaries. E.g., Wes Streeting is keen for GPs to use AI to summarise consultations with patients. What could possibly go wrong!?
Depending on the exact use that might be fine. If it's something like "here's an audio recording of the consultation, print a bulleted list of medication dosage and schedule," that's within current capabilities. Expecting it to look up drug interactions or point out things the doctor missed (tasks filled by nurses these days), not so much.
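For what it's worth, that narrow use is already easy to prototype. Here's a minimal sketch assuming the open-source openai-whisper package for transcription and the OpenAI Python client for the summarising step; the file name, model choices, and prompt wording are all my own placeholders, and a real clinical system would obviously need far more care:

```python
# Sketch only: transcribe a recorded consultation, then ask an LLM for a bulleted
# list of medication dosage and schedule. File name, models, and prompt are assumptions.
import whisper
from openai import OpenAI

transcript = whisper.load_model("base").transcribe("consultation.wav")["text"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "From this GP consultation transcript, print a bulleted list of each "
                   "medication mentioned with its dosage and schedule, and nothing else:\n\n"
                   + transcript,
    }],
)
print(reply.choices[0].message.content)
```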
 
Isn't that something AI should be relied on for?
I'm not sure America has an equivalent of the BNF (British National Formulary), the reference manual for doctors on drug recommendations and interactions. Certainly a BNF check could be done, but it wouldn't require an AI - just a query against tables, assuming they're in a suitable format.

Just checked, and maybe AI could help, as I spotted this for one of my meds:
Warning
Combination products, for example co-amilofruse (amiloride+furosemide), do not appear in this list. You must check interactions with each constituent medicine.
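That warning is exactly the kind of thing a plain table query (no LLM needed) can handle, provided combination products are expanded into their constituents first. A toy sketch; the drug names, the tiny interaction table, and the data layout are all made up for illustration, and a real check would run against the full formulary data:

```python
# Toy sketch of a table-driven interaction check. Combination products are expanded
# into constituent medicines before checking, per the BNF warning above.
# All entries here are made up for illustration.

CONSTITUENTS = {"co-amilofruse": ["amiloride", "furosemide"]}
INTERACTIONS = {frozenset(["furosemide", "digoxin"]): "hypokalaemia increases digoxin toxicity"}

def expand(drugs):
    """Replace combination products with their constituent medicines."""
    out = []
    for d in drugs:
        out.extend(CONSTITUENTS.get(d, [d]))
    return out

def check(current_drugs, new_drug):
    """Return any listed interactions between a new prescription and current medicines."""
    hits = []
    for a in expand(current_drugs):
        for b in expand([new_drug]):
            note = INTERACTIONS.get(frozenset([a, b]))
            if note:
                hits.append((a, b, note))
    return hits

print(check(["co-amilofruse"], "digoxin"))
# [('furosemide', 'digoxin', 'hypokalaemia increases digoxin toxicity')]
```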
 
Depending on the exact use that might be fine. If it's something like "here's an audio recording of the consultation, print a bulleted list of medication dosage and schedule," that's within current capabilities. Expecting it to look up drug interactions or point out things the doctor missed (tasks filled by nurses these days), not so much.
That's not the plan. The plan as I understand it is to use AI to replace GPs having to type up notes in the patient record, which are a summary of the consultation. This requires the AI to understand what is medically relevant. Given many patients waffle and report things such as the onset of symptoms as "...it started after I got back from holiday...", I think there is a high likelihood of the AI missing important details and including irrelevant waffle.
 
I'm not sure America has an equivalent of the BNF (British National Formulary), the reference manual for doctors on drug recommendations and interactions.
They do, and it's available online.
But I see an AI interface (not an LLM) where the MD fills out any number of boxes for a particular patient's visit that day, and a new prescription would automatically be checked against the drugs the patient is already taking.

Chances are, this is already being done in some arenas.
Not an LLM, which I gather is what they mean by the term.
Why are you gathering that?

Why do we have to specify LLM or not, whenever we talk about a task AI can perform?

Maybe we need an "LLM" forum to distinguish such discussions as a subset of AI research/development.
 
They do, and it's available online.
But I see an AI interface (not an LLM) where the MD fills out any number of boxes for a particular patient's visit that day, and a new prescription would automatically be checked against the drugs the patient is already taking.

Chances are, this is already being done in some arenas.
I'd be very surprised if EMIS and SystmOne don't do that. They're the main UK GP systems and I knew some of their key guys.
 
