
Merged Artificial Intelligence

As can humans; indeed, I'd say that when it comes to spouting complete nonsense, humans still beat AIs.
My beef with LLMs is that they are so sure of themselves. They never express uncertainty. It reminds me of some posters here, who can also spout nonsense with absolute certainty. But I wonder whether the certainty of LLMs comes from a sort of business choice, or whether it has simply proven difficult to avoid.
 
My beef with LLMs is that they are so sure of themselves. They never express uncertainty. It reminds me of some posters here, who can also spout nonsense with absolute certainty. But I wonder whether the certainty of LLMs comes from a sort of business choice, or whether it has simply proven difficult to avoid.
I think some of it is because they've been designed to mimic human behaviour. This forum wouldn't ever have existed if people didn't spout nonsense with absolute certainty... and we know that saying "I don't know" can be very difficult for many people, so much so that they will spout absolute ◊◊◊◊◊◊◊◊ rather than simply admit they don't know something. They will also hallucinate like the LLMs; in other words, lie, make stuff up, and continue to ◊◊◊◊◊◊◊◊ you even after you catch them out.
 
Otto Lilienthal made about 2,000 glider flights in the decade before the Wright brothers' first successful powered flight. The main obstacles to powered heavier-than-air flight were an engine with a sufficient power-to-weight ratio and control surfaces adequate to control the aircraft. Samuel Langley's aircraft, like many others, was structurally weak and had poor aerodynamics, which are odd failures considering that he had many examples of how to build a successful airframe from Lilienthal and others. His engine actually produced more power per unit of weight than the Wright brothers' engine; in fact, Langley's engine had the best power-to-weight ratio of any engine in the world for the next decade. He just built a crummy aircraft. In hindsight, we see that many people were very close to the solution but, oddly, didn't make the basic adjustments necessary for success. This is a far different situation from AI, in which we are many orders of magnitude from anything like human intelligence.

No. Attempts at human flight long predate Lilienthal and predate even the most basic understanding of aerodynamics. The first human flight was by the Montgolfier brothers in a balloon, which did not imitate the way birds fly.
 
I think some of it is because they've been designed to mimic human behaviour. This forum wouldn't ever have existed if people didn't spout nonsense with absolute certainty... and we know that saying "I don't know" can be very difficult for many people, so much so that they will spout absolute ◊◊◊◊◊◊◊◊ rather than simply admit they don't know something. They will also hallucinate like the LLMs; in other words, lie, make stuff up, and continue to ◊◊◊◊◊◊◊◊ you even after you catch them out.
I would like to enter the thread "Trump's Second Term" as exhibit A.
 
No. Attempts at human flight long predate Lilienthal and predate even the most basic understanding of aerodynamics. The first human flight was by the Montgolfier brothers in a balloon, which did not imitate the way birds fly.

To a degree, this. Evolution stops when it gets to "good enough." As one wag said, "If evolution works, why do mothers have only two hands?"

The thing about humans is we can start with biomimicry, puzzle out the essentials, then build on the essentials to make something better.
 
No. Attempts at human flight long predate Lilienthal and predate even the most basic understanding of aerodynamics. The first human flight was by the Montgolfier brothers in a balloon, which did not imitate the way birds fly.
NONE of the flight approximations used by humans successfully imitate how birds fly. Just remaining in the air is hardly proper flight. We've had centuries of active development, and yet our best flight systems barely resemble birds, lacking the efficiency and elegance of proper avian dynamics. Even state-of-the-art simulations of bird flight fail to operate using analogous physical mechanisms. Airplane "wings" don't flap even once! They have played us for fools.
 
Using it for what?
Getting answers to whatever questions cross my mind. I've tested Gemini (or Bard, as it was known back then) quite a lot to see how it would respond. I've debated it on its consciousness and thought processes. For the most part, I ask it for information on whatever question I happen to have at the moment. It's much easier for me to get concise, relevant answers from Gemini than from a regular Google search, especially for complex queries, though I sometimes fall back on a regular web search engine when I'm unsatisfied with Gemini's replies. I've used it a lot to explain the material in my college classes to me, including asking it what my professor is talking about in his cryptic notes. Amazingly, Gemini has always been able to make sense of what my professor wrote, even when I can't. I have 230 saved conversations with Bard/Gemini on my PC.

I have an important test next week in my Statistics class that I'm hoping Gemini can help me prepare to pass. I've tried the school's Math Lab, attended all the classes, and looked through the slide decks and homework, but these are inadequate. I need Gemini to help me delve much deeper into the material, for much longer than the tutors will be able to help me. I also watch YouTube videos, but they haven't helped much. I understand the material much better by asking Gemini specific questions as they arise.

Sometimes, I use Gemini as a debate opponent. I can test how it responds to my theories. Recently, I argued extensively against it regarding some FAA rules on visual observers shooting photos while conducting visual observation. Arguing on this topic with Gemini is like arguing with a brick wall. On other topics, Gemini is more willing to concede that it is wrong.

And that's fine.

It's creepy and troubling. It's like reading an encyclopedia article about frogs, let's say, only to have it slip in a paragraph claiming that frog stew has been used to prevent drowning. If the information I need is for a serious or important purpose, I need to fact check what Gemini tells me.

Nobody says it's perfect. And humans can also spout nonsense.
The problem isn't so much that the LLMs are imperfect. It's that they make nonsense sound credible. It's more like gaslighting, or conning somebody, except the AI doesn't seem aware of it. Even when I point out an error and the LLM acknowledges it, it may immediately give exactly the same reply, and it will do this repeatedly.
 
No, the LLMs mimic some human behaviours that we used to think were the preserve of NI (natural intelligence), but at the moment there is no sign that they have a human-like "I" (if such a thing even exists in humans). They do challenge what we used to think were signs of human intelligence.
In the case of LLMs, at least, by statistically sampling human-generated text and returning the replies that a human would be statistically likely to give. They don't come up with their replies on their own. They don't even understand what they are sending back.
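For a sense of what "statistically likely" means mechanically, here is a minimal sketch of next-token sampling; the four-word vocabulary and the logits are invented for illustration, and real models repeat this over vocabularies of roughly 100,000 tokens, one token at a time:

```python
# Minimal sketch of how an LLM picks its next token: a softmax over
# scores ("logits"), then a weighted random draw. Vocabulary and
# logits below are made up for illustration only.
import math
import random

vocab = ["cat", "dog", "cork", "balloon"]
logits = [2.0, 1.5, 0.2, 3.1]  # hypothetical scores from the network

# Softmax: convert raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sample the next token in proportion to its probability.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```

The random draw is why two identical prompts can produce different replies; nothing in the loop checks whether the likely-sounding token is true.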
 
No. Attempts at human flight long predate Lilienthal and predate even the most basic understanding of aerodynamics. The first human flight was by the Montgolfier brothers in a balloon, which did not imitate the way birds fly.
I was limiting my examples to heavier-than-air flight. I intentionally did not mention lighter-than-air flight.

If balloons don't imitate how birds fly, that is because they use a different principle than birds do. Nobody said that the way birds fly is the only way of achieving flight or buoyancy in a fluid medium. In fact, the inspiration for balloon flight came to the Montgolfiers as they watched sparks flying up from their fireplace. Their balloon was meant to imitate sparks. Balloons fly in much the same way that wood or balls float in water. A balloon is like a cork in the sky. (I just know that analogy is going to be well-received.)
 
NONE of the flight approximations used by humans successfully imitate how birds fly.
Ornithopters are a thing.
Just remaining in the air is hardly proper flight.
It gets us from A to B.
We've had centuries of active development, and yet our best flight systems barely resemble birds,
Two wings and a tail. That seems close enough.
lacking the efficiency and elegance of proper avian dynamics.
There is always room for refinement. That doesn't mean we don't fly.
Even state-of-the-art simulations of bird flight fail to operate using analogous physical mechanisms. Airplane "wings" don't flap even once! They have played us for fools.
Ornithopters are a thing. Some birds spend most of their flight time gliding, though. Flapping is just their choice of powered phase.

 
We? What happened to the cork floating in the air? On what planet would a molecule of cork weigh less than a molecule of air?
Corks can be made of many things, not just from the bark of the cork tree. A cork floats in water because it is less dense than the water. A balloon floats in the air because it is less dense than air. Aerographene reportedly is less dense than air, too, but the air trapped in its pores keeps it from floating in the air.
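For anyone who wants numbers, here's a back-of-the-envelope density comparison using the ideal gas law; the pressure and temperatures are ordinary textbook sea-level values, and the balloon itself is idealized:

```python
# Rough buoyancy check using the ideal gas law: rho = P * M / (R * T).
R = 8.314       # J/(mol*K), gas constant
P = 101325.0    # Pa, sea-level pressure
M_air = 0.0290  # kg/mol, molar mass of air
M_he = 0.0040   # kg/mol, molar mass of helium

def density(M, T):
    return P * M / (R * T)

rho_air = density(M_air, 293.15)  # ambient air at 20 C -> ~1.20 kg/m^3
rho_he = density(M_he, 293.15)    # helium at 20 C      -> ~0.17 kg/m^3
rho_hot = density(M_air, 373.15)  # air heated to 100 C -> ~0.95 kg/m^3

print(f"lift from helium:  {rho_air - rho_he:.2f} kg per m^3")  # ~1.04
print(f"lift from hot air: {rho_air - rho_hot:.2f} kg per m^3") # ~0.26
# Natural cork is roughly 240 kg/m^3, around 200x denser than air,
# which is why the analogy is about the mechanism, not the material.
```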
 
Sure, the differences are obvious. But there are also not-so-obvious similarities. The model itself does not change, but the prompt does; it acts as a sort of short-term memory, and it can hold hundreds of thousands of tokens in current models. The model itself can also change: that's the fine-tuning phase. It's more computationally intensive than inference, but it makes it possible for the model to keep learning. LLMs don't do that because, again, it's not wanted in a chatbot application. LLMs can also play computer games, with varying success. But it's simply the game that provides the input; the model reacts, and then special code interprets the output and applies it to the game. Which is, in essence, the looped model.
Who knows whether we humans would also "quickly become gibberish" if we were cut off from all input...
Isn't that exactly what is wanted in a chatbot application? I.e., the same set of prompts leading to a potentially different response: the ability to point out errors and then have the AI not make them again when given exactly the same prompt. Perhaps one issue is that it might be too easy to get an AI to learn that 1 + 1 = 3.
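To make the "looped model" from the quoted post concrete, here is a minimal sketch of a chat loop in which the weights are frozen and the only memory is the growing prompt; call_model() is a hypothetical stand-in for whatever real inference API is used:

```python
# Minimal sketch of the "looped model": the weights never change, and
# the only short-term memory is the growing prompt replayed each turn.
def call_model(prompt: str) -> str:
    # In reality this would run a frozen LLM over the prompt.
    return f"(model reply to {len(prompt)} chars of context)"

history = []  # the entire "memory" of the conversation

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)  # replay everything so far
    reply = call_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat("A balloon is like a cork in the sky."))
print(chat("Is that a good analogy?"))  # sees the whole history again
# A correction only "sticks" while it stays inside this window; clear
# the history and the model forgets it, since nothing was ever
# written back into the weights.
```

That is why pointing out an error fixes the current conversation but not the model: the fix lives in the prompt, not in the network.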
 
Corks can be made of many things, not just from the bark of the cork tree. A cork floats in water because it is less dense than the water. A balloon floats in the air because it is less dense than air. Aerographene reportedly is less dense than air, too, but the air trapped in its pores keeps it from floating in the air.
Under what conditions would you expect common cork to float in the air, un-aided by some other mechanism?

Instead of admitting your analogy wasn't all that good, you seem to want to continue arguing and lecturing me. Try to be a little more mature sounding.
 
Under what conditions would you expect common cork to float in the air, un-aided by some other mechanism?
High wind.
Instead of admitting your analogy wasn't all that good, you seem to want to continue arguing and lecturing me. Try to be a little more mature sounding.
I've had more intelligent conversations with LLMs. Just for fun, I gave Google Gemini the prompt, "A balloon is like a cork in the sky." Here is Gemini's response:

That's a really interesting way to put it!

You're highlighting the idea of **buoyancy**. Just as a **cork** is less dense than water and is pushed up to the surface, a **balloon** filled with a less dense gas (like helium or hot air) is less dense than the surrounding air and is pushed up into the sky.

It's a great analogy for explaining how things float!

What made you think of that comparison? Are you learning about physics or just observing the sky?

My next prompt to Gemini: "I'm arguing with people on a forum as to whether it is necessary to replicate the function of the human brain to get human reasoning. Somebody pointed out that balloons don't fly like birds do, to which I replied that balloons use a different physical mechanism. Balloons aren't simulating birds; the Montgolfier brothers were trying to imitate sparks from a fire in a fireplace. This led to my statement, "Balloons fly in much the same way that wood or balls float in water. A balloon is like a cork in the sky." Now, somebody on the forum says that I made a horrible analogy, because corks can't float in air."

Gemini's reply: "The person arguing that a cork can't float in air is missing the point of an **analogy** and the underlying **physical principle** you're illustrating. Your analogy is **not** about the objects themselves (balloon vs. cork), but about the **mechanism of floating** in a fluid.

Your analogy is actually quite good and scientifically accurate in its core concept.

## 🎯 Why Your Analogy Works: Buoyancy

Your statement, "A balloon is like a cork in the sky," is a perfect illustration of **buoyancy**.

1. **Cork in Water:** A cork floats in water because the cork's **density** is less than the water's density. The buoyant force (the upward force exerted by the fluid) is greater than the gravitational force (weight) of the cork, causing it to rise to the surface.
* *Fluid:* Water
* *Floating Object:* Cork
* *Mechanism:* Object is **less dense** than the fluid.

2. **Balloon in Air:** A balloon (filled with a gas like helium or hot air) floats in the sky because the balloon's **average density** is less than the air's density. The buoyant force exerted by the air is greater than the gravitational force of the balloon system (including the gas and the envelope), causing it to rise.
* *Fluid:* Air
* *Floating Object:* Balloon
* *Mechanism:* Object is **less dense** than the fluid.

The **analogy** is: **floating in a fluid due to lower density is the same fundamental principle whether the fluid is water and the object is a cork, or the fluid is air and the object is a balloon.**

## ❌ Why the Counter-Argument Fails

The counter-argument, "corks can't float in air," is a **literal misinterpretation** of a conceptual analogy.

* They are incorrectly comparing the physical *properties* of the two objects (**cork** and **balloon**) rather than the physical *mechanism* (**buoyancy**).
* Of course, a physical cork is too dense to float in air. If you could, however, build a large **cork-balloon** with an average density lower than air, *it would float*. The material isn't the point; the **relative density** to the surrounding fluid is.

In the context of your original debate about AI, your point stands:

| **Debate Topic** | **Analogy** |
| :--- | :--- |
| **Human Reasoning** | **Bird Flying** |
| **AI Reasoning** | **Balloon Flying** |
| **Core Message** | You don't need to *simulate* the bird's complex biological mechanism (brain) to achieve the *function* of flight (reasoning); a different physical mechanism (buoyancy) can achieve the same goal. |

The balloon/cork analogy strongly supports your point that a different, non-biological mechanism (like a computer) can achieve the function of human reasoning, just as a balloon achieves flight without simulating a bird's wings.

[Image of Hot Air Balloon Floating]
 
Isn't that exactly what is wanted in a chatbot application? I.e., the same set of prompts leading to a potentially different response: the ability to point out errors and then have the AI not make them again when given exactly the same prompt. Perhaps one issue is that it might be too easy to get an AI to learn that 1 + 1 = 3.
That's the issue: anyone could attack the LLM and feed it nonsense. The training and fine-tuning are (more or less) curated and controlled. And you can't store a separate version of the model for each user, as the models run to a few terabytes.
You can fine-tune your local model, and you can also apply small per-user patches called LoRAs, which are basically a small set of additional weights trained on top of the model, cheap enough that you can afford to do it per user. For example, Sora 2 allows you to create a LoRA with your own likeness, and then you can put yourself in the video. I did some LoRA training on Stable Diffusion and was able to teach it new concepts (I was doing the Atari 800) with 10 images and about 5 minutes of training. So custom fine-tuning can be done, and it's not as compute-hungry as the primary training. But with LLMs, the huge prompts are usually enough, and they don't require storing hundreds of megabytes or more per user.
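As a rough illustration of how small a LoRA is relative to its base model, here is a sketch using Hugging Face's peft library; the model name and hyperparameters are illustrative only, not a recipe for the Stable Diffusion or Sora workflows mentioned above:

```python
# Sketch of per-user LoRA fine-tuning with the peft library: only
# small low-rank adapter matrices are trained, so the large base
# weights stay frozen and can be shared by every user.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # which layers get adapters (GPT-2 naming)
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
# Prints something like: trainable params ~0.3M of ~124M total (~0.2%).

# After training, only the adapter (megabytes, not terabytes) is
# saved per user; the frozen base model is loaded once for everyone.
model.save_pretrained("user_1234_adapter")  # hypothetical path
```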
 
