• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Merged Cognitive Theory, ongoing progress

A Turing Machine is an abstract construct used in computational theory. A real, working version of this is necessarily finite. However, instead of being called a Finite Turing Machine it goes by the less intuitive name of Linear Bounded Automaton. Any working computer we have today should be an LBA.
The first of those sentences is true, but what follows is balderdash.

Turing machines start out finite and remain finite at every finite stage of computation. They are potentially infinite in that there is no bound on the size of their tape (memory). To build a real, working Turing machine, you build a machine with a finite tape that is dynamically extensible: Whenever the TM gets to the end of the tape, you add more tape, pretty much as you would add another memory device to your computer system or would replace one of your system's memory devices by a similar device with more capacity, copying the replaced device's contents onto the new one as you do so.
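
For anyone who wants that in concrete form, here's a minimal sketch in Python. The machine itself is a toy of my own choosing (unary successor); the point is the tape that grows on demand:

[code]
# Minimal sketch: a TM tape that is finite at every step but grows
# whenever the head walks off the right end, so it is only
# *potentially* infinite. (A full version would extend the left
# end the same way.)

def run_tm(tape, transitions, state="start", blank="_"):
    """Run a TM given as {(state, symbol): (new_state, write, move)}."""
    tape = list(tape)
    pos = 0
    while state != "halt":
        if pos == len(tape):        # out of tape:
            tape.append(blank)      # add more, like adding a memory device
        symbol = tape[pos]
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Toy machine: scan right over 1s, append one more 1, halt
# (computes n -> n+1 in unary).
successor = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run_tm("111", successor))  # -> "1111"
[/code]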

barehl tells us a Linear Bounded Automaton is finite, but the only bound on the size of a Linear Bounded Automaton's tape is derived from the size of its input. To build a real, working Linear Bounded Automaton, you have to build a machine with an arbitrarily long finite tape. You'd do that the same way you'd build a Turing Machine, but it's slightly simpler because you only have to add the additional tape (memory) once, when you are given a concrete input and can compute the amount of memory needed from the input's size.
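
The LBA version of the same sketch is even simpler, as noted above: you size the tape once, from the input. The constant factor c below is arbitrary; each particular LBA has its own.

[code]
# LBA sketch: all the tape is allocated up front, as a linear
# function of the input's size, and it never grows afterwards.
def make_lba_tape(input_string, c=2, blank="_"):
    tape = list(input_string)
    tape += [blank] * ((c - 1) * len(input_string))
    return tape  # fixed length for the whole computation

print(len(make_lba_tape("111")))  # -> 6 cells for a 3-symbol input
[/code]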

It is silly to say, as barehl did, that today's working computers are Linear Bounded Automata rather than Turing machines. You can add new or larger memory devices to consumer-grade computers by plugging in a USB thumb drive. It is also possible to build computers that allow other kinds of memory devices to be added dynamically without shutting down the computer. The ability to do that is necessary if you're building a Linear Bounded Automaton, and once you've done it you've done all of the engineering necessary to construct a working Turing machine.

In the real world, the real reason real computers aren't equivalent to Turing machines is that their ability to address memory devices, even large hard drives, is typically limited by a fixed maximum number of bits used to identify the location/cell/sector/word/whatever you want to access on a memory device, and by the fixed number of bits used to identify the particular memory device you want to access. Both of those technical limitations could be overcome quite easily, but it's cheaper and faster to use a fixed number of bits that's believed to exceed the number of memory devices and the device capacities that will actually be used during the anticipated lifetime of the computer system.
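
To put rough numbers on that, here's the arithmetic (the widths are just illustrative; real machines vary, and many "64-bit" CPUs wire up fewer physical address bits):

[code]
# How far a fixed address width reaches: w bits distinguish 2**w locations.
for bits in (32, 48, 64):
    print(bits, "address bits ->", 2**bits, "distinct byte addresses")
# 32 -> 4,294,967,296               (4 GiB)
# 48 -> 281,474,976,710,656         (256 TiB)
# 64 -> 18,446,744,073,709,551,616  (16 EiB)
[/code]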

A cognitive theory that's based on fundamental misconceptions about Turing machines and Linear Bounded Automata is unlikely to add to our knowledge of cognition or intelligence.
 
This could mean that animals have been more dependent on environmental entropy than I suspected. It would also suggest that animal consciousness is more constrained. However, humans don't seem to have this severe reliance on environmental entropy. Why? The next question is whether this is seen in great apes, which are closer in brain structure to humans. Since I'm not very knowledgeable about this, I need more information from people who are.

You draw the oddest conclusions from your examples.

There is a huge difference between a complex environment and a random environment. Meeting the challenge of understanding a complex environment has evolutionary advantages. But there is no point in trying to understand a truly random environment.

On the other hand there are algorithms, such as simulated annealing, that rely on randomness to help solve a complex problem. This has relevance to problem solving using neural networks.
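
For concreteness, here's a bare-bones simulated annealing loop in Python, on a bumpy one-dimensional cost function I made up. The randomness enters twice: in proposing a move and in occasionally accepting an uphill move.

[code]
import math, random

def cost(x):
    return x * x + 10 * math.sin(3 * x)   # bumpy: many local minima

x, T = 10.0, 5.0
while T > 1e-3:
    candidate = x + random.gauss(0, 1)    # random proposal
    delta = cost(candidate) - cost(x)
    # Always accept improvements; accept uphill moves with
    # probability e^(-delta/T), which shrinks as T cools.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate
    T *= 0.999                            # cooling schedule

print(round(x, 2), round(cost(x), 2))
[/code]

Those occasional uphill acceptances are exactly what lets the search escape local minima that would trap a purely greedy descent.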

I believe these to be two separate and distinct issues. Or perhaps I am totally missing the point.
 
What is the textbook definition of "causality in Turing derivative devices"

A Turing Machine is an abstract construct used in computational theory....
You need to assume that posters in a thread about AI know the basics of AI, e.g. what a Turing machine is!

And actually reply to a post. I will make it clearer:
What is the textbook definition of "causality in Turing derivative devices".
The definition of causality in any Turing machine might be that the state of the machine + its input causes a change in state. That may include a symbol on the tape that means "generate a random number and go to the state with that number". However, I suspect that would be more of a "subprogram" on the tape, e.g. a set of symbols that make up a random number generator, etc.

List sources other than your imagination that relate "causality in Turing derivative devices" to living organisms, e.g. zoo animals. Otherwise we just have what looks like a fantasy that abnormal behaviors (e.g. repetition) in zoo animals are related to a vague or even nonexistent definition.
 
The first of those sentences is true, but what follows is balderdash.

Turing machines start out finite and remain finite at every finite stage of computation. They are potentially infinite in that there is no bound on the size of their tape (memory). To build a real, working Turing machine, you build a machine with a finite tape that is dynamically extensible: Whenever the TM gets to the end of the tape, you add more tape, pretty much as you would add another memory device to your computer system or would replace one of your system's memory devices by a similar device with more capacity, copying the replaced device's contents onto the new one as you do so.

There is a limited amount of matter and energy in the universe that is within our cosmic horizon. Nothing you build in this universe is potentially infinite. Everything you can build can be reduced to a very large finite state machine.
 
I believe we have encountered a fellow traveler of the late and un-lamented member ProgrammingGodJordan with a better grasp of the English language but the same delusional approach to science and research.
 
I believe we have encountered a fellow traveler of the late and un-lamented member ProgrammingGodJordan with a better grasp of the English language but the same delusional approach to science and research.

Please don't compare other members to ProgrammingGodJordan.
 
Hierarchical topographic maps. The binding problem was only a problem in the context of the hypothetical computer architecture doing the binding, which had to choose between global concepts devoid of spatial context and localized information with no unifying architecture. Turns out nested topologies can translate between the two just fine.

If you're familiar with deep learning, it operates on a similar principle.

Let's check this.

The neural binding problem(s), published in Cognitive Neurodynamics.

Abstract:
The famous Neural Binding Problem (NBP) comprises at least four distinct problems with different computational and neural requirements. This review discusses the current state of work on General Coordination, Visual Feature-Binding, Variable Binding, and the Subjective Unity of Perception. There is significant continuing progress, partially masked by confusing the different versions of the NBP.
Introduction:
In Science, something is called “a problem” when there is no plausible model for its substrate.
Here's Beelz' reference:

The brain’s organizing principle is topographic feature maps (Kaas 1997) and in the visual system these maps are primarily spatial (Lennie 1998).
This is the important point about visual feature binding:

Another salient fact is that the visual system can perform some complex recognition rapidly enough to preclude anything but a strict feed-forward computation. There are now detailed computational models (Serre et al. 2007) that learn to solve difficult vision tasks and are consistent with much that is known about the hierarchical nature of the human visual system. The ventral (“what”) pathway contains neurons of increasing stimulus complexity and concomitantly larger receptive fields and the models do as well.
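
As an aside, the "concomitantly larger receptive fields" point is easy to see with a little arithmetic. This is just a sketch; the kernel and stride choices below are mine, not the paper's.

[code]
# Receptive-field growth in a feed-forward hierarchy: stacking even
# small filters makes each deeper unit "see" a rapidly growing patch
# of the input. (kernel, stride) per layer below is arbitrary.
layers = [(3, 1), (3, 2), (3, 2), (3, 2)]

rf, jump = 1, 1
for i, (k, s) in enumerate(layers, 1):
    rf += (k - 1) * jump      # standard receptive-field recurrence
    jump *= s
    print(f"layer {i}: sees {rf} input pixels")
# layer 1: 3, layer 2: 5, layer 3: 9, layer 4: 17
[/code]
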
I agree. Neural networks have made a lot of progress in this area, at least for very, very specific applications. So, it's solved, right? No.

Fortunately, quite a lot is known about Visual Feature-Binding, the simplest form of the NBP.
We've made progress on two of these:

Suggesting plausible neural networks for General Considerations on Coordination and for Visual Feature-Binding is no longer considered a “problem” in the sense of a mystery.
But not the other two:

Neural realization of variable binding is completely unsolved

Today there is no system or even any theory of a system that can understand language the way humans do.

We will now address the deepest and most interesting variant of the NBP, the phenomenal unity of perception. There are intractable problems in all branches of science; for Neuroscience a major one is the mystery of subjective personal experience.

What we do know is that there is no place in the brain where there could be a direct neural encoding of the illusory detailed scene (Kaas and Collins 2003). That is, enough is known about the structure and function of the visual system to rule out any detailed neural representation that embodies the subjective experience. So, this version of the NBP really is a scientific mystery at this time.
Experience is still a mystery.
 
Again, that isn't what I said and you know it. Taking a quote out of the context of the paragraph which explains what I was saying in great detail doesn't help your claim.

If it is obvious that who is president does not affect the description or publishing of science then why do you think
I have not made a claim that Trump would block or interfere with publication. Why do you keep pretending that I did? I've already explained what my concerns were. The fact that you keep ignoring what I said and then making up new things to attribute to me doesn't help you at all in a discussion with me. Or are you doing this as a performance for other people here?

Why don't you try sticking to ideas that are actually mine?
 
I'm referring to the input stream of random numbers you are feeding your computational device in order to make probabilistic decisions.
I said some time ago that I don't believe that consciousness can be explained using only computational theory. Can I prove that yet? No. But that is the direction I'm working in. I may find out that computational theory is adequate after all. Again, all I can tell you is that I was unable to make progress using only computational theory. We'll see.

Oh, if a pseudorandom number generator of sufficient quality is fine, then your probabilistic Turing machine just reduces to a normal one that incorporates a PRNG of that quality.
Well, yes, no disagreement there.
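
To spell out that reduction (the "coin" interface here is my own, just for illustration): the seed becomes part of the machine's state, and every "random" choice is then a deterministic function of that state.

[code]
import random

class DeterministicCoin:
    """A seeded PRNG standing in for a probabilistic TM's coin flips."""
    def __init__(self, seed=42):          # the seed is part of the machine
        self.rng = random.Random(seed)
    def flip(self):
        return self.rng.random() < 0.5    # deterministic given the seed

coin = DeterministicCoin()
# Same seed -> the same "random" run, step for step, every time.
print([coin.flip() for _ in range(5)])
[/code]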
 
It is silly to say, as barehl did, that today's working computers are Linear Bounded Automata rather than Turing machines.

In the real world, the real reason real computers aren't equivalent to Turing machines is that their ability to address memory devices, even large hard drives, is typically limited by a fixed maximum number of bits used to identify the location/cell/sector/word/whatever you want to access on a memory device, and by the fixed number of bits used to identify the particular memory device you want to access.

Well, that clears it up. Apparently if I say it then it's silly but if someone else like Clinger says the same thing then it's not silly. Thank you.
 
There is a huge difference between a complex environment and a random environment. Meeting the challenge of understanding a complex environment has evolutionary advantages. But there is no point in trying to understand a truly random environment.

I'm not sure what a truly random environment would be. Wouldn't you need to have variable or changing laws of physics for that? If you are a frog, can you predict when an insect that could make a good meal might happen by? I don't see how you could. That would seem to be a random event.
 
There is a limited amount of matter and energy in the universe that is within our cosmic horizon. Nothing you build in this universe is potentially infinite. Everything you can build can be reduced to a very large finite state machine.

That is true for any specific problem. But it isn't really true for general problems. You would end up needing to have collections of finite state machines and then additional FSMs to decide which one to use. You quickly get into intractability where the size and complexity of the FSM grows much faster than the complexity of the problem. So, as far as I can tell, theoretically true but not practical.
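
To put a number on how fast the flat-FSM view blows up: a machine with m bits of memory, viewed as a finite state machine, has one state per memory configuration, i.e. 2^m states.

[code]
# One FSM state per memory configuration: 2**m states for m bits.
m = 1024 * 8                 # just 1 KiB of memory
states = 2 ** m
print(len(str(states)))      # -> 2467 (the state count has 2,467 digits)
[/code]

So the reduction is theoretically valid, but the resulting machine is astronomically larger than the program-plus-memory description it replaces.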
 
I believe we have encountered a fellow traveler of the late and un-lamented member ProgrammingGodJordan with a better grasp of the English language but the same delusional approach to science and research.
You can talk to me directly. I don't resort to feelings and intuition and mysterious forces to explain things. You seem to think that I rely on vagueness or some kind of semantic arguments. Vagueness is what I'm trying to get rid of and terms are only useful if they can be robustly defined. If you know of some evidence that refutes my ideas or if I find evidence elsewhere then I'll have to modify or abandon my ideas. That's what science is.
 
Just for fun I remapped this to my scenario where people think I am gay.

I've been treated differently since I was six years old. I was treated differently all through grade school, high school, and even in college. I've gotten this from family, friends, co-workers, employers, acquaintances, and mental health professionals. This has gone on for a number of decades. So, how many possibilities are there?

1.) We have a case of mass delusion where people who come into contact with me mistakenly think I'm gay when I'm actually not. Since this involves people who have never met one another, it would have to include some kind of telepathy.
2.) At the age of six I could not help fooling people into thinking I was gay. And apparently got good enough at it to fool people who actually were smart and knowledgeable.
3.) The conclusions by others about me have been consistent because they were based on observations.

Thing is, I am not gay. Effeminate? Perhaps. Overly mannered in the way I carry myself? Sure. Attracted to men? Nope. Just ain't there.

OK, back on topic.
 
Update:

The latest thing I've been working on is pronoun reversal in people with autism. I found this reference particularly good since the writer has Asperger's.

https://musingsofanaspie.com/2014/03/14/pronoun-reversal-and-confusion/


Before someone else tells me that my research doesn't relate to my own research (that still makes me laugh), this involves self-perception and language usage.

When you say you're working on pronoun reversal, what form does your work take?
 
Just for fun I remapped this to my scenario where people think I am gay.

You changed what I said. The original post was:

2.) At the age of six I learned to fool people into thinking I was smart. And apparently got good enough at it to fool people who actually were smart and knowledgeable.

So if you were going to substitute then you would need to change both:

2.) At the age of six I learned to fool people into thinking I was gay. And apparently got good enough at it to fool people who actually were gay and knowledgeable.

I'm not sure how that would work. Children I grew up with were not sexually expressive until at least age 10. For example, I used to hold hands with my girlfriend in kindergarten but I didn't have any concept of sexuality then.
 
When you say you're working on pronoun reversal, what form does your work take?

You find out what information is available that describes this phenomenon. Then you see how this fits into cognitive theory. I think a complete theory should be able to cover topics like this. Items like this could either support or falsify a given theory. How would you fit this into something like Integrated Information Theory?
 
You changed what I said. The original post was:

2.) At the age of six I learned to fool people into thinking I was smart. And apparently got good enough at it to fool people who actually were smart and knowledgeable.

So if you were going to substitute then you would need to change both:

2.) At the age of six I learned to fool people into thinking I was gay. And apparently got good enough at it to fool people who actually were gay and knowledgeable.

I'm not sure how that would work. Children I grew up with were not sexually expressive until at least age 10. For example, I used to hold hands with my girlfriend in kindergarten but I didn't have any concept of sexuality then.
Whoosh!
 
