

Well, most of those who believe that brains have a special magical spark that makes them intelligent, also believe that only human brains contain this spark.
It may not be the human brain that contains a magic spark, even setting aside science's finding that human brains contain genes making them anatomically, structurally, and functionally different from those of any other creature. The tendency in my background is to regard the human body as a conduit for the spirit, and the brain as a conduit for the mind. Our bodies may be nothing special, but they aren't the full definition of who we are, either.
 
I'm not a Materialist. I don't believe that all reality can be described by the laws of physics. I believe in the supernatural. What I don't know, or have a particular belief in, is where the physical ends and the spiritual begins. I don't know where brain and mind diverge, if at all. I do believe that our lives and our ultimate consciousness transcend the physical universe.
See, if you'd started with "I believe in God and that human souls are divinely endowed with the unique capability for reason and soulless machines like computers will never possess that capability" then I wouldn't have had anything to argue with you about. "I have faith that it is so" is a perfect position that cannot be gainsaid.

But when you try to use reason and logic to support that position, you're entering our territory.
 
I believe in God and that He created all things, and human intelligence is a rare instance in all of creation. Whether that makes achieving it impossible for mere mortals, I don't know. I do know that we aren't even close, and won't ever be close.

Ah ok.


eta: But I enjoyed the discussion that followed, absolutely. (y) Including the Aeon article linked to, which, as others pointed out, was interesting but very limited; oddly so, given your high praise for it, which now looks like it followed from the author's conclusion rather than from his reasons for getting there.

You do realize you contradict yourself in that short portion quoted above, don't you? That's what happens when you start a priori with Santa and then try to force-fit his fat ass down whatever chimney is handy.

I don't know if there's a thread on NI; you can start one if you like. You seem well informed, so it might make for interesting reading. You can also visit R&P if you like, to talk about why your a priori assumptions about the supernatural are nonsense.

Joined recently, I see. Welcome, you're a cool addition to our little gathering.
 
See, if you'd started with "I believe in God and that human souls are divinely endowed with the unique capability for reason and soulless machines like computers will never possess that capability" then I wouldn't have had anything to argue with you about. "I have faith that it is so" is a perfect position that cannot be gainsaid.

But when you try to use reason and logic to support that position, you're entering our territory.
I mentioned it in this thread a few days ago. However, the reason that I say that no AI will ever have human intelligence is not just based on my religious views. It's based on the fact that humans have never perfectly replicated any human body part, much less the most complex structure in the Universe. I'm not a fan of these fantasy future speculations that technology is going to solve everything. Claiming that it is in principle possible is nothing more than speculation. You don't have any empirical evidence for it. The Null Hypothesis, the status quo, is that such a thing does not and cannot exist.
 
A question about the very off-putting obsequiousness of LLMs: is this in any way necessary, or is it just something put in to flatter the massive egos of investors and CEOs?
I assume that it is for the same reason that any company representative is required to be polite and courteous to customers and the general public. It's for good customer relations and PR. AI companies have other battles to fight than having people complaining about rude AI.
 
I mentioned it in this thread a few days ago.
I noticed. But you chose not to make that your argument. If you had, there would be nothing more to discuss.

However, the reason that I say that no AI will ever have human intelligence is not just based on my religious views.
Really.

It's based on the fact that humans have never perfectly replicated any human body part, much less the most complex structure in the Universe.
It's never been done, therefore it can never be done? Come on.

I'm not a fan of these fantasy future speculations that technology is going to solve everything. Claiming that it is in principle possible is nothing more than speculation. You don't have any empirical evidence for it. The Null Hypothesis, the status quo, is that such a thing does not and cannot exist.
Does not, I'll grant you. Can not, I won't. For "can not", something has to actively prevent it. You have not demonstrated what that something is.
 
On the obsequiousness issue, I assumed the interface was specified by the same sort of people who set KPIs for call centres like the one a friend worked in. He was consistently rated top by the test callers, who said he was the most helpful, best at resolving problems, etc., but was rated lowest by his company because he failed to, e.g., use the caller's name often enough: "Yes, Alan, of course, Alan, I am certain I can help you, Alan. No, Alan, I know nothing and can do nothing, Alan. Is there something else I can help you with, Alan?"
 
It's never been done, therefore it can never be done? Come on.
The likelihood of something happening after consecutive failed attempts becomes vanishingly small as the number of attempts increases. I've always been a pessimist. I'm willing to concede the possibility of success if I can see some evidence of progress. After seventy years of effort, we've gotten as far as a statistical parrot of human output.
 
I have no issues with gut-based opinions. They're pretty common even amongst AI experts. There are very few things we know for sure about AI.
I'm way more optimistic about AI, though. But that means I'm way more pessimistic about the future of the human race. Does that make me an overall optimist or a pessimist? :unsure:
 
The likelihood of something happening after consecutive failed attempts becomes vanishingly small as the number of attempts increases. I've always been a pessimist. I'm willing to concede the possibility of success if I can see some evidence of progress. After seventy years of effort, we've gotten as far as a statistical parrot of human output.
Nobody has attempted to completely simulate an entire human brain before.

But again, this is irrelevant because you haven't proven - or even demonstrated - that the only way to synthesise a mind is to simulate a biological brain.
 
I have no issues with gut-based opinions. They're pretty common even amongst AI experts. There are very few things we know for sure about AI.
I'm way more optimistic about AI, though. But that means I'm way more pessimistic about the future of the human race. Does that make me an overall optimist or a pessimist? :unsure:
I'm pessimistic about the future of the human race, but not because of AI!
 
Similar to "junk" DNA, glial cells were thought to be little more than "glue" for neurons. Now it looks like they, too, are involved in information processing. The model of the neuron currently used in AI may be significantly less powerful than neuron + glia in biological brains.
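
For reference, here is roughly what "the model of the neuron currently used in AI" amounts to: a weighted sum of inputs pushed through a nonlinearity, with nothing playing the role of glia. This is a minimal sketch, not any particular library's implementation.

```python
# Minimal sketch of the standard "point neuron" used in artificial networks:
# a weighted sum of inputs passed through a nonlinearity. Anything glia might
# contribute (modulation, local state, slow chemical dynamics) has no analogue here.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, activation)  # ReLU nonlinearity

print(artificial_neuron([1.0, 0.5], [0.3, 0.8], 0.1))  # prints 0.8
```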

Current LLMs have abandoned recurrent networks for transformers because they are much easier to train. This is very different to biological brains, which make use of feedback all over the place. For example, 80% of the inputs to visual cortex come from the rest of the brain, making visual perception more like controlled hallucination.

In linear signal processing, which I am far more familiar with, recurrent networks are called Infinite Impulse Response (IIR) filters and non-recurrent networks Finite Impulse Response (FIR) filters. The output of a FIR filter will eventually decay to zero after the input has been removed. The output of an IIR filter may persist forever, or even grow, after the input has been removed. Generally, IIR filters can achieve more than FIR filters given a fixed amount of processing hardware, though there are lots of optimisations that can easily be applied to FIR filters that are not applicable to, or much harder for, IIR filters.
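
To make the distinction concrete, here is a toy impulse test (a minimal sketch with made-up coefficients, not a real DSP implementation): give each filter type a single impulse, then silence, and watch the outputs.

```python
# Feed both filters one impulse followed by silence.
def fir(x_hist, coeffs=(0.5, 0.3, 0.2)):
    # Output depends only on a finite window of past inputs,
    # so it must reach exactly zero once the input stops.
    return sum(c * x for c, x in zip(coeffs, x_hist))

def iir_step(x, prev_y, a=0.9, b=0.1):
    # Output feeds back into itself, so it can persist
    # (or grow, if a > 1) after the input is removed.
    return a * prev_y + b * x

inputs = [1.0] + [0.0] * 9  # an impulse, then silence
y_iir = 0.0
for n, x in enumerate(inputs):
    x_hist = inputs[max(0, n - 2):n + 1][::-1]  # most recent input first
    y_iir = iir_step(x, y_iir)
    print(n, round(fir(x_hist), 4), round(y_iir, 4))
# The FIR output is exactly zero from n = 3 onward; the IIR output
# just keeps shrinking geometrically (0.1, 0.09, 0.081, ...) forever.
```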

My speculation is that AI will need recurrent networks if it's going to get to anything like NI. AI at the moment just sits there doing nothing if not prompted. NI generates its own prompts.
 
The feedback is there though .. the output of the network is part of the prompt when you ask again. Oh wait, even more than that: LLMs generate one token at a time, so when they are outputting the second token they are also reacting to their first token, and so on.
So an LLM also generates its own prompt. They only react when prompted, sure .. but that's an artificial limitation. Obviously you can simply loop the output. LLMs also have to be trained to stop the response at some point (to emit a special stop token). They totally can blab forever.
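A minimal sketch of that loop, with `model` as a hypothetical stand-in for any network that maps a token sequence to a next token (not a real API):

```python
# Autoregressive generation: every token produced so far is fed back
# in as input for the next step.
def generate(model, prompt_tokens, max_tokens=100, stop_token="<eos>"):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        next_token = model(tokens)    # the model sees its own prior output
        if next_token == stop_token:  # the trained-in stop; remove this check
            break                     # and the cap, and it could blab forever
        tokens.append(next_token)
    return tokens
```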
 
The likelihood of something happening after consecutive failed attempts becomes vanishingly small as the number of attempts increases. I've always been a pessimist. I'm willing to concede the possibility of success if I can see some evidence of progress. After seventy years of effort, we've gotten as far as a statistical parrot of human output.
I don't think this is a fair assessment. The computational theory, computational resources, and a sufficiently large corpus only came together in the last twenty years or so. After about two decades of efforts (building, of course, on previous work), we've already got a pretty good statistical parrot.
 
