
Merged Artificial Intelligence

They do, and it's available online.
But I see an AI interface (not an LLM) where the MD fills out any number of boxes for a particular patient's visit that day, and a new prescription would automatically be checked against the drugs the patient is already taking.
Why do we have to specify LLM or not, whenever we talk about a task AI can perform?
Because "AI" on its own doesn't mean a damn thing. It's a marketing term. The feature you describe above isn't AI. It's just basic software. You might use AI to do it, since unstructured information is difficult to parse, but the AI you'd use to do it... is an LLM.
 
Because "AI" on its own doesn't mean a damn thing.
Agree.
...unstructured information is difficult to parse, but the AI you'd use to do it... is an LLM.
The "Physicians' Desk Reference" is structured, so no, you would not use an LLM to glean information from it.

You might use something like IBM watsonx.ai, which is not an LLM, but does use them.
 
Google's AI nonsense:

Search for "PTC inrush current limiter"

AI Overview:

A PTC (Positive Temperature Coefficient) inrush current limiter uses a special thermistor that has high resistance when cold (limiting the initial current spike when a device powers on) and low resistance when hot (allowing efficient normal operation), acting like a self-resetting fuse to protect components like capacitors and rectifiers from damaging surges, often bypassed after the initial charge.

This is a weird mash-up of how NTC and PTC thermistors used in current limiter applications operate.

LLMs are merely hollow shells that impersonate intelligence.

I use lumo.proton.me. I prompted it with "PTC inrush current limiter" and it gave me an impressive amount of output. It uses its own output formatter, so I asked it for BBCode. Despite the verbiage, it made the same mistake Google did (see the Common Applications table below).

lumo.proton.me said:
What a PTC In‑rush Current Limiter Is

A PTC (Positive Temperature Coefficient) thermistor is a semiconductor device whose resistance rises sharply when its temperature exceeds a certain threshold. When it’s used as an in‑rush current limiter, the component behaves like this:

Phase: Initial power‑on
What happens: The PTC is cold, so its resistance is very low (typically a few Ω or less). This allows the supply voltage to reach the downstream circuit quickly, but the low resistance also limits the instantaneous surge because the device still presents a finite impedance.
Why it works: The material's intrinsic resistance at room temperature is modest, so the initial surge is reduced compared with a plain short.

Phase: Current flow
What happens: As current passes, the device self‑heats (Joule heating). Once the temperature reaches the trip point (the "switch‑over" temperature), the resistance jumps dramatically, often by two orders of magnitude or more.
Why it works: The positive‑temperature‑coefficient characteristic means resistance ∝ temperature. The rapid rise in temperature pushes the device into its high‑resistance region.

Phase: Steady‑state
What happens: After tripping, the PTC presents a high resistance (hundreds of Ω to kΩ), effectively limiting the current to a safe, low level. The downstream circuit now sees a normal operating voltage because the voltage drop across the PTC is small relative to the supply.
Why it works: In the high‑resistance region the device behaves like a resistor that limits current according to Ohm's law (I = V/R).

When the load is turned off and the PTC cools, it returns to its low‑resistance state, ready for the next power‑on event.
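The phase behaviour described above can be sketched as a toy thermal feedback loop (a sketch only: every constant is illustrative, chosen so the trip is visible in simulation, not taken from any datasheet, and the steep PTC transition is approximated as a hard threshold):

```python
# Toy model of a PTC thermistor in series with a resistive load.
# All constants are illustrative, not from any real part's datasheet.

V_SUPPLY = 24.0      # supply voltage (V)
R_LOAD = 10.0        # downstream load resistance (ohms)
R_COLD = 1.0         # PTC cold-state resistance (ohms)
R_TRIPPED = 500.0    # PTC resistance above the trip temperature (ohms)
T_TRIP = 80.0        # trip temperature (deg C)
T_AMBIENT = 25.0     # ambient temperature (deg C)
HEAT_GAIN = 2.0      # deg C rise per joule dissipated (toy thermal mass)
COOL_RATE = 0.01     # fractional cooling toward ambient per second
DT = 0.01            # time step (s)

def ptc_resistance(temp_c):
    """Steep PTC transition approximated as a hard threshold."""
    return R_TRIPPED if temp_c >= T_TRIP else R_COLD

def simulate(steps):
    temp = T_AMBIENT
    currents = []
    for _ in range(steps):
        r_ptc = ptc_resistance(temp)
        i = V_SUPPLY / (r_ptc + R_LOAD)              # Ohm's law for the series loop
        power = i * i * r_ptc                        # Joule heating in the PTC
        temp += HEAT_GAIN * power * DT               # self-heating
        temp -= COOL_RATE * (temp - T_AMBIENT) * DT  # cooling toward ambient
        currents.append(i)
    return currents

currents = simulate(20000)
print(f"initial current: {currents[0]:.2f} A")   # low-resistance phase
print(f"final current:   {currents[-1]:.2f} A")  # after the trip
```

A real part's resistance transition is steep but continuous, and its thermal behaviour is far more complex; the sketch only reproduces the qualitative low‑R → self‑heat → high‑R sequence from the table.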

Typical Characteristics (Qualitative)

Parameter | Typical Range / Description
Cold‑state resistance (R₍c₎) | < 10 Ω (often 0.1–5 Ω)
Trip temperature (Tₜ) | 60 °C – 120 °C, selectable by material formulation
Hold current (Iₕ) | Maximum current the device can carry indefinitely in the high‑resistance state (often 0.1 × rated current)
Maximum (rated) current (Iₘₐₓ) | Specified by the part; typical values 0.5 A – 10 A for surface‑mount devices, higher for larger through‑hole parts
Resistance after trip (R₍t₎) | 100 Ω – kΩ, depending on size and material
Voltage rating | Up to several hundred volts for larger devices; lower for SMD versions
Response time | Tens of milliseconds to a few seconds to reach the high‑resistance state, depending on current and thermal mass

These ranges are generic; exact numbers come from each manufacturer’s datasheet.

Common Applications

Application | Why a PTC In‑rush Limiter Is Chosen
Power supplies & adapters | Prevents large charging currents when bulk capacitors charge, protecting diodes, fuses, and upstream transformers.
Motor drives & actuators | Limits the surge that occurs when a motor winding is energized, reducing stress on contacts and drivers.
LED lighting | Controls the initial surge that can damage LED drivers or cause flicker.
Battery chargers | Reduces the in‑rush to the charger's input stage, helping meet IEC/UL safety requirements.
Automotive electronics | Handles the high‑current spikes when a vehicle's ignition or infotainment system powers up.
Telecommunications equipment | Protects sensitive front‑end circuitry during line power restoration.

Advantages Over Alternatives

Feature | PTC Limiter | Series Resistor | NTC Thermistor (soft‑start) | Active MOSFET Soft‑Start
Self‑resetting | Yes (cools → low R) | No (needs replacement) | Yes (but resistance falls with temperature) | Requires control circuitry
Simple wiring | Two‑terminal passive | Two‑terminal passive | Two‑terminal passive | Needs driver IC
Robust against overload | Trips to high R, limiting current | Fixed R → may overheat | R decreases with temperature → can exacerbate overload | Depends on controller design
Cost | Low to moderate (depends on rating) | Very low | Low | Higher (controller + MOSFET)
Size | Small SMD or through‑hole options | Same as resistor | Similar to resistor | Larger due to active components

Selecting a Suitable PTC

1. Determine the maximum steady‑state current your downstream circuit will draw. Choose a PTC whose rated current (Iₘₐₓ) is at least that value, preferably with a safety margin (~20%).

2. Identify the expected in‑rush current (often the capacitor charging current). The PTC’s hold current (Iₕ) should be higher than this peak, otherwise the device will trip prematurely.

3. Pick a trip temperature that matches the thermal environment. For ambient temperatures near 40 °C, a device with Tₜ ≈ 80 °C gives enough headroom.

4. Check voltage rating to ensure it exceeds the highest line voltage the part will see (including transients).

5. Consider physical constraints – board space, mounting style (SMD vs. through‑hole), and thermal dissipation.

Manufacturers such as Bourns, Littelfuse, Murata, and TE Connectivity publish families of PTC in‑rush limiters with detailed tables that map these parameters. Their datasheets also include derating curves for temperature and voltage, which are useful for robust design.
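The first four selection steps can be folded into a quick numeric sanity check (a sketch mirroring the steps as stated: the dictionary keys, example values, headroom figure, and the 20% margin are illustrative assumptions; step 5 is omitted because physical constraints aren't numeric):

```python
# Quick sanity check of a candidate PTC against the selection steps above.
# All field names and example values are illustrative, not from a real datasheet.

def check_ptc(part, circuit, margin=1.2):
    """Return a list of (step, passed) pairs for the candidate part."""
    results = []
    # Step 1: rated current covers steady-state draw with ~20% margin
    results.append(("rated current",
                    part["i_max"] >= margin * circuit["i_steady"]))
    # Step 2: hold current above the expected in-rush peak
    results.append(("hold current",
                    part["i_hold"] > circuit["i_inrush_peak"]))
    # Step 3: trip temperature leaves headroom over ambient (~40 C here)
    results.append(("trip temperature",
                    part["t_trip"] >= circuit["t_ambient"] + 40.0))
    # Step 4: voltage rating exceeds worst-case line voltage
    results.append(("voltage rating",
                    part["v_max"] > circuit["v_line_max"]))
    return results

circuit = {"i_steady": 1.0, "i_inrush_peak": 8.0,
           "t_ambient": 40.0, "v_line_max": 265.0}
part = {"i_max": 2.0, "i_hold": 10.0, "t_trip": 80.0, "v_max": 300.0}

for step, ok in check_ptc(part, circuit):
    print(f"{step}: {'OK' if ok else 'FAIL'}")
```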

Practical Design Tips

- Parallel Capacitors: If you have a large bulk‑cap bank, split it into smaller groups and place a PTC in series with each group. This spreads the heating load and improves reliability.
- Thermal Coupling: Keep the PTC away from heat‑generating components (e.g., power MOSFETs) to avoid premature tripping.
- Fuse Coordination: Pair the PTC with a fast‑acting fuse sized for the maximum allowable fault current. The fuse protects against catastrophic failures while the PTC handles normal surges.
- Testing: Verify the in‑rush behavior on a bench power supply with a current probe. Observe the voltage across the PTC during turn‑on to confirm the expected resistance transition.
- EMI Considerations: The rapid change in resistance can generate a brief voltage spike. Adding a small snubber (RC network) across the PTC can tame ringing in high‑frequency circuits.

Summary

A PTC in‑rush current limiter is a passive, self‑resetting device that reduces the damaging surge that occurs when power is first applied to a circuit. By exploiting the positive temperature coefficient of its semiconductor material, it starts with low resistance, heats up under current, and then jumps to a high resistance, thereby limiting the steady‑state current. It’s widely used in power supplies, motor drives, LED drivers, and many other applications where protecting upstream components from a brief but large current spike is essential.

If you need a specific part recommendation or want to compare datasheets for a particular current/voltage rating, let me know the exact requirements (max steady‑state current, expected in‑rush, voltage, package type), and I can fetch the latest details using the Proton information service.
 
I would say that there is a fundamental difference between LLMs just making ◊◊◊◊ up and humans hallucinating something.

The latter can be generalized as recognizing a pattern where there is none, a hypersensitivity to a specific input, lowering the threshold for a network of neurons to fire if a sufficiently similar input is registered. A human might see a face in general in a cloud, but he would never say: "yep, that's definitely uncle Howard up there floating around. He looks so much better now that he is water vapor."

LLMs, on the other hand, hallucinate something very specific as a result of combining various data points into something that doesn't exist in the data. An LLM falsely says: "scientist x published y paper in journal z", because said scientist published similar papers in similar journals. It doesn't say "scientist x worked on subject y, so if you come across a paper on that topic, chances are they are involved."
 
A human might see a face in general in a cloud, but he would never say: "yep, that's definitely uncle Howard up there floating around. He looks so much better now that he is water vapor."
Not disagreeing with you in general about LLM and human hallucinations being different, but there's Jesus in a tortilla, man. Humans are in fact that dumb. If you want to stick to weather, you don't even have to go back to last century to find people unironically arguing that Florida gets hurricanes because their god hates gay nightclubs in Miami.
 
you really, really need to know if your tortilla has been made from human faces or not.
Yes, humans are that stupid. Entire faiths have schismed over whether wine and a cracker are merely symbolic of body and blood, or whether they literally turn into the flesh of a specific human who died thousands of years ago the second everyone stops looking.

We just had a global pandemic exacerbated by people taking horse medicine meant to cure worms, despite every actual expert telling them that's ◊◊◊◊◊◊◊ stupid don't do that, because they convinced themselves it would work anyway. Somehow.

LLM hallucinations are a real problem with their current presentation as oracles - "here's the answer no need to search" - but they still give better advice than your racist uncle on Facebook.
 
The nearest human analogue to LLM hallucinations I can think of is human memory. We access our memories read/write, not read-only, and rebuild them from bits and pieces. Then the reconstructed memory is what we remember. Not that close, but still.
As I said, false memories. We are so convinced about our "hallucinations" that we prosecute people based on them, defend them with outrage, and even define a class of them (the Mandela effect); then we have the likes of flat-earthers, the religious convinced they'd heard god, and so on.

The LLMs mimic human behaviour, which in the past we called human intelligence - that's why I've kept saying we don't want AIs that mimic "human intelligence", we want them much, much better than that. (And that's why I think the company to bet on is Google.)
 
As I said, false memories. We are so convinced about our "hallucinations" that we prosecute people based on them, defend them with outrage, and even define a class of them (the Mandela effect); then we have the likes of flat-earthers, the religious convinced they'd heard god, and so on.

The LLMs mimic human behaviour, which in the past we called human intelligence - that's why I've kept saying we don't want AIs that mimic "human intelligence", we want them much, much better than that. (And that's why I think the company to bet on is Google.)
In my experience those who score high on extroversion "hallucinate" far more than introverts, who are more just the facts, ma'am.

I was at a management training conference and we were sorted into a line of those who scored highest on introversion (which was me, incidentally) to those who scored highest on extroversion. We were then shown a photo of two young girls wearing light blue dresses sitting on wooden steps in front of a wooden house and then asked to describe the scene. As we went round the room from introverts to extroverts the descriptions got ever more elaborate, with the person who scored highest on extroversion describing the scene as two sisters sitting on the steps of their grandmother's house.
 
Not disagreeing with you in general about LLM and human hallucinations being different, but there's Jesus in a tortilla, man. Humans are in fact that dumb.
Pareidolia is neither a hallucination, nor is it being dumb. It's something every human being experiences. We are hardwired to identify faces. Evolutionarily it is more advantageous to see faces where they don't exist than to not see faces where they do. Ascribing mystical or religious properties to simple pareidolia is what's dumb, not the pareidolia itself.

And hallucinations in humans aren't pareidolia. A hallucination is a dysfunction of the sensory apparatus. When someone hallucinates spiders, they actually see spiders. They don't just think they see spiders, they actually see them.

 
The nearest human analogue to LLM hallucinations I can think of is human memory. We access our memories read/write, not read-only, and rebuild them from bits and pieces. Then the reconstructed memory is what we remember. Not that close, but still.
It seems to me that human dreams may be a closer analog to LLM hallucinations. To me, dreams have a very Markov-chain feel to them, as my brain is constantly choosing what happens next based on what just happened. No planning, just sequential extrapolation based on what's happened in the dream so far, whatever is going on in my life at the moment, my mood, etc. As I understand it, that sort of sequential extrapolation is how LLMs choose the next word.
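That sequential-extrapolation idea can be illustrated with a toy first-order Markov chain, which picks each next word only from words that have followed the current word before (a far cruder mechanism than an LLM's, but the same "choose the next step from what just happened" shape; the corpus here is made up):

```python
import random
from collections import defaultdict

# Toy first-order Markov text model: each next word is sampled only
# from the words that followed the current word in the training text.
# Much simpler than an LLM, but it shares the "extrapolate the next
# step from the current context, with no plan" shape of a dream.

corpus = ("the dream shifted and the room became a beach and "
          "the beach became a hallway and the hallway became a room").split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def ramble(start, length, seed=0):
    """Generate a word sequence by repeated next-word sampling."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:          # dead end: no observed continuation
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

print(ramble("the", 12))
```

Every output is locally plausible (each adjacent pair occurred in the corpus) while the whole can drift anywhere, which is roughly the dream-like quality described above.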
 
Pareidolia is neither a hallucination, nor is it being dumb. It's something every human being experiences. We are hardwired to identify faces. Evolutionarily it is more advantageous to see faces where they don't exist than to not see faces where they do. Ascribing mystical or religious properties to simple pareidolia is what's dumb, not the pareidolia itself.

And hallucinations in humans aren't pareidolia. A hallucination is a dysfunction of the sensory apparatus. When someone hallucinates spiders, they actually see spiders. They don't just think they see spiders, they actually see them.

https://en.wikipedia.org/wiki/Hallucination
They aren't a "dysfunction of the sensory apparatus"; they are what your quoted source says they are. In a visual hallucination my sensory apparatus, i.e. my eyes, isn't malfunctioning.

That aside, I would have much preferred if they had used the word "malfunctioning" for LLM malfunctions; using a word like "hallucination" feeds into the idea that these LLMs are doing more than mimicking human behaviour.
 
Well, there are many AI malfunctions. This is just one of them. Also, it's a technical term; it's logical that a term from human psychology was used, even if it's not exactly the same. It's still useful, and it's common practice everywhere.
 
