
Hammering out cognitive theory

you waffle (e.g. igLearning / physLearning vs what children do, unspecified) and obfuscate
You use pejorative terms and draw conclusions about my motives and character based on nothing.

Your discursive process annoys me. My emotion is frustration and annoyance and impatience, which is what I experience with trolls, obfuscators and time-wasters.
A few years ago I was trying to write fiction with someone else. She told me that she could only work on one story at a time or she would get the characters confused. I was working on about two dozen stories so that wasn't an issue for me. I can't really imagine getting the characters confused although I can understand that someone else could.

I don't know ahead of time what might be useful so I cover a lot of ground in different areas. Meandering from topic to topic is how I think. It isn't confusing to me but I can understand that it might be to someone else. For example, I watched J. P. Moreland - Loving God With All Your Mind. This is a lecture on apologetics. If you go to 26:28 he starts talking about world views and types of knowledge. This is standard philosophy. His specific framework for knowledge wasn't useful to me. However, I needed something that went beyond Shannon's use of information. Many of the ideas in philosophy are not directly useful to me because they require too many assumptions before you can start (presupposition). On the other hand, computational theory doesn't really include these types of concepts. So, you borrow an idea from philosophy and then approach it with the same structure that you would use in computational theory or math. Likewise, behavioral science is lacking the foundation that would explain what is going on, but it has mountains of evidence. So, if you create an idea about cognition, the obvious way to see if it is ludicrous or if it fits somehow is to try to match it with evidence from behavioral science.

now that I know how special you've been, [you may] be feeling that that is slipping recently and so turned to this.
Recently being November, 2013?

bare headlines. (Ah, I'm on to you - that's what "barehl" stands for, isn't it?) :biggrin:
It's possible. I would have to ask my mother though since I didn't participate in choosing my name when I was born.

I am sorry you felt attacked. I suppose that's not a bad description. As I said, I find your discursive style very annoying.
Yet, what is interesting is that while trying to explain this to you I realized why emotions work better than fixed machine states. Thank you. I guess I should write that down.

I sense that you're stuck trying to bridge two beliefs - that the brain is just matter and its activity is computable, and that consciousness is somehow special and not computable.
No, that's quite wrong. The brain is just matter and the mind or consciousness is entirely a function of the brain. The second part you are talking about is whether or not cognition can be duplicated using only computational theory or whether it would require something else. I'm still quite certain that cognition is possible with a machine even if it isn't a Turing/Von Neumann machine.

I'm not sure if you're genuinely trying to get help puzzling this out or not. I know the subject matter is way beyond me, but I'm pretty good at gleaning the gist of a conversation and I have a little background since I've been programming for the last 25 years and have an interest in cognition and biology. You seem hell bent on not being corrected on anything, even by people with Masters degrees who dissect your own logic and show it to be sorely wanting.
Again, this isn't correct. Many of the characterizations of my ideas aren't even close to my ideas. So, although those ideas are wanting, they aren't mine. I've even had people repeat the same thing that I've said and then give that as a reason why they disagree with me. You are also lumping into that group the often stated conclusion that an idea must be wrong because I haven't proven it even if there is no proven alternative. I would ask you to quote or link some of these but we both know that you can't.

Strangest of all, despite apparently arguing that AI can't be conscious, you also seem to suggest that this is exactly what you've cracked, saying that you can build a "thinking" machine.
This is about as mangled a statement as I have seen. You are confusing different things. Artificial Intelligence theory has been quite productive in what it has been doing since 1956. It has made a number of advances. The Computational Theory of Mind suggests that in time this will develop into something indistinguishable from consciousness. This is also called General AI. In other words, if this idea is correct then a General AI would be capable of acting and reasoning just like a human. I'm saying that this idea is incorrect, that no matter what you do with AI, it will never reason like a human or be indistinguishable from a human. I'm saying that you need an alternative theory and some differences in hardware from what you have with a Turing/Von Neumann machine. In other words, I'm saying that no existing computer based system is capable of becoming conscious or acting in a conscious manner. However, you could build a machine that would do this using different hardware and different foundational theories.

I don't think there's anything wrong with working on something in secret, nor discussing it in reasonable enough detail and responding cogently to challenges. You're not doing either.
Well, if I'm not working on it in secret then why aren't you aware of what I'm working on? Secondly, I tried discussing it in more detail in the Foundations thread and no one cared about that. Clearly, neither claim is accurate.

You are, of course, covering a lot of ground. There isn't any lack of detail in the areas related to machine cognition that the threads cover. But it's like you drop some bit of theory into the conversation and then spend another page running around trying to avoid people telling you you don't understand it properly.
Well, again, I could ask for you to quote this but once again we both know that you can't. This is little more than yet another baseless conclusion.

The reality is that there are vast differences in assumptions between me and other posters on this board, meaning the posters who actually contribute. If General AI and the Computational Theory of Mind are correct then presumably you could build a gigantic machine out of wood and pegs, using water wheels to give it energy. In other words, a machine using no electrical or electronic parts. Now, I doubt this idea is at all unusual to Russ or Clinger. I assume that they would accept that this type of machine would be possible. But the idea bothered me; I don't think the same way they do. But, a feeling isn't worth much; you actually have to prove it. So, I first started working on availability. This relates to how reliable a machine is versus its complexity. It occurred to me that by the time a machine of this sort was complex enough to do work it might be so unreliable that you couldn't keep it running. That seemed like a possible answer. However, I still wasn't satisfied. But, eventually I came up with the temporal boundary equations. Those indicated that such a machine would only work in an open environment if it was fast enough. That made it impractical to build. Now, Russ wouldn't care about that because he assumes that you can always simulate an environment slow enough to stay within the boundaries. That is true. However, I still liked the proof because it refuted some concepts in science fiction that had been around for a while. I worked on refuting other concepts like building an intelligent organism out of bacteria or getting nanites to work. Again, I doubt Russ cares much about these ideas but refuting them was important to me.

Towards the end of this, last August, after great criticism of your views and not a little "attack",
Let's be honest. Beelz was attacking my character and doing it just as you have been without any reason or evidence. Both of you get irritated and so you start making up things about me being a troll or whatever. Maybe it makes you feel better, but it is still childish and it definitely degrades my respect for you. This is not difficult to understand and I'm sure that both you and Beelz do understand it regardless of how you act. I could easily be wrong about my ideas. The theories I've worked on could contain fatal errors or omissions that make them useless. However, being wrong does not make me a troll or whatever pejorative-of-the-day you or Beelz feel like tossing out.

you said, "If it gets published then maybe we can have a productive conversation."

But you want the missing step from me months later.
You left out a few things. That post where I was talking about publication was August 17. I wanted to get some perspective about publishing my ideas so I started the Perspective thread on August 22. However, no one wanted to talk about that. The only thing they actually wanted to talk about was their certainty that I was wrong and many of them were quite rude. So, on October 1, I started the Foundations of Cognition thread to try to explain things at a more fundamental level. However, people were so rude in that thread that there was no point. I just gave up on it.

Your claim that I don't listen to anyone is patently absurd. When Nonpareil expressed confidence in Integrated Information Theory, I spent time studying it until I understood why it didn't work. If I was arrogant then why didn't I just assume that it was wrong and move on? If I was suffering from Dunning-Kruger then how could I even understand it? I had never thought about needing a foundational theory. But, after comments from Russ and Clinger I came up with knowledge theory. That greatly expanded the subject. Lately I've been studying neural networks to see what their learning limitations are. I still post because I'm still trying to figure things out. I have no idea why that would be confused with trolling.

Now, as to your question. Okay, let's say I publish a book. The ideas in it are completely wrong and it gets shot down within a matter of days. Why would it be discussed here? Presumably people would just congratulate themselves that they were right and, with their ego intact, they would move on. I suppose there is some very slight chance that not every idea would be useless and someone here might want to develop some of the ideas. But, that does seem unlikely. It would seem more likely that collaboration would come from someone not on this board.

Now, let's say that I publish a book and the ideas are right. I wouldn't think the people here who have been attacking me would want to discuss it. Presumably they would want to just pretend it didn't exist and move on. Who else would want to? Most here have no idea what I've been talking about and probably find the topic boring. They would probably be more interested in practical applications like if they could now get a robotic housekeeper or something. There might be academic interest; I assume if I were right then there would be. However, that would probably be at a conference of some kind. Some might buy a book and want to discuss it but I don't know how much spare time I would have. And, the material is complicated. Explaining it would probably be a two semester course in college.

So, that's why I've asked what step 2 would be. Perhaps I'm overlooking something.
 
Okay. https://en.wikipedia.org/wiki/Motor_cortex

The motor cortex is the region of the cerebral cortex involved in the planning, control, and execution of voluntary movements.

This agrees with what I've said. I can't imagine where else voluntary movements would originate.

https://en.wikipedia.org/wiki/Cerebellum

The cerebellum does not initiate movement, but it contributes to coordination, precision, and accurate timing. It receives input from sensory systems of the spinal cord and from other parts of the brain, and integrates these inputs to fine-tune motor activity. Cerebellar damage produces disorders in fine movement, equilibrium, posture, and motor learning.

This also matches what I've been saying about learning physical skills. Are you saying something different from this?

Further down:

These animals most likely had a somatomotor cortex, where somatosensory information and motor information were processed in the same cortical region. This allowed for the acquisition of only simple motor skills, such as quadrupedal locomotion and striking of predators or prey. Placental mammals evolved a discrete motor cortex about 100 mya.

http://www.ncbi.nlm.nih.gov/pubmed/22393252

These findings demonstrate that the motor cortex and corticospinal tract contribute directly to the muscle activity observed in steady-state treadmill walking.
 
I'm still quite certain that cognition is possible with a machine even if it isn't a Turing/Von Neumann machine.

These two statements are contradictory. I thought we covered this. In detail. Repeatedly. Under the currently known laws of physics, it is not possible to build anything that cannot be simulated with a Turing machine or a Von Neumann machine. You are claiming that the brain works according to laws of physics and even mathematics/logic that are unknown.

I'm saying that this idea is incorrect, that no matter what you do with AI, it will never reason like a human or be indistinguishable from a human. I'm saying that you need an alternative theory and some differences in hardware from what you have with a Turing/Von Neumann machine. In other words, I'm saying that no existing computer based system is capable of becoming conscious or acting in a conscious manner. However, you could build a machine that would do this using different hardware and different foundational theories.

There are a number of people who would like a word with you if you have discovered a way to compute things not possible on Turing machines or Von Neumann machines. Stop bothering with the whole cognition thing, take a break, and provide a piece of hardware that solves one of the undecidability problems.

Now, Russ wouldn't care about that because he assumes that you can always simulate an environment slow enough to stay within the boundaries.

Not only that, but humans regularly exist in a state where they are cut off from environmental input, but still engage in cognitive activities.
 
An organism that is environmentally reactive has a specific reaction for each environmental stimulus. As you increase cognitive ability this is no longer the case.

Here's another problem where it depends on how you define it. Plenty of lower organisms respond in random ways to environmental stimuli. Additionally, as soon as you have any form of memory, even with only a few neurons, the organism can react in ways that depend on past stimuli, not merely the current environmental stimulus. So a sea sponge has developed a "loss of behavioral correspondence". Heck, even certain protozoa utilize proteins to modify their response to stimulus based on past experience.
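To see the distinction in miniature, here's a toy sketch (class names invented purely for illustration): a purely reactive organism is a fixed lookup from stimulus to response, and even a single piece of stored state breaks that one-to-one correspondence.

```python
# Toy illustration (invented names): a purely reactive organism is a fixed
# stimulus -> response table, so behavior corresponds 1:1 with the stimulus.
class Reactive:
    TABLE = {"light": "approach", "shadow": "retract"}

    def respond(self, stimulus):
        return self.TABLE[stimulus]

# One piece of memory (habituation) already breaks that correspondence:
# the same stimulus no longer produces the same response.
class Habituating(Reactive):
    def __init__(self):
        self.shadows_seen = 0

    def respond(self, stimulus):
        if stimulus == "shadow":
            self.shadows_seen += 1
            if self.shadows_seen > 3:   # repeated harmless shadows get ignored
                return "ignore"
        return super().respond(stimulus)

h = Habituating()
print([h.respond("shadow") for _ in range(5)])
# ['retract', 'retract', 'retract', 'ignore', 'ignore']
```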

That doesn't surprise me. If you look at the Drake equation:

[Snipped a bunch of information that does not answer "How you've come about information on identity paradox on Homo Heidelbergensis, Neanderthals, and Denisova" (what research papers, etc). And "the definition of the words civine, and civinity."]

This doesn't make any sense. Evolution didn't plan to make jawed fishes. The bones used later in the jaw were originally used for something else. Evolution could not plan to make cognition. I'm saying that cognition was an adaptation using what was available. I would agree that it was a cheaper path in terms of what was available. But you always have that question of whether there were other options. Obviously Sapiens have no other options, but other species on other planets might, and non-biological systems might.

So you think there is a different path that produces the same results of cognition without cognition.

Your argument about incremental progress only makes sense if you are claiming that cognition was inevitable. But, based on the other hominid species, that does not seem to be the case. And, a cheaper strategy would lead you to a local minimum but probably not a global minimum. So, you are either arguing that there is only one path (or that all paths converge) or you are counting on luck.

The argument is not to claim that cognition was inevitable, but that it is possible to obtain in small incremental steps. A partially cognitive system is still useful. Designing increasingly complex and capable learning systems offers a clear path forward. Attempting to design it all in one go does not.

You've missed the point entirely. Name anyone else who has talked about how many memory channels would be needed. It's a short list: zero. Even Kurzweil didn't talk about that.

There is a reason people don't talk about it. It isn't possible for there to be a certain number of memory channels necessary for cognition. If you needed 500, there isn't a reason that 499 would not work. There isn't a reason 1 wouldn't work. It would just run at 1/500th speed. You are out there on your own on your whole temporal theory thing.

You are talking about two different things. If you want a GCS to function like a human in this environment then you have to maintain the temporal boundaries at that level. You don't really have much choice. Then you start talking about simulating the environment, which, of course, is a completely different environment and therefore different temporal boundaries. It is an interesting question whether or not you could demonstrate a GCS using, say, one processor. What would take one second in the brain would then take perhaps twelve days. A year's worth of running time would be about 32 seconds of brain time. Of course a rat is about 1%. Demonstrating 31 seconds of rat brain time would take about 3.7 days. Of course, then you have the environment, senses, and memory to simulate. That puts you up to 280 days. Five seconds of rat brain time would take a month and a half. Would that be worthwhile?
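A back-of-the-envelope sketch of that arithmetic, taking the figures above as given (roughly twelve wall-clock days per second of human brain time, a rat at about 1% of the cost, and an overhead multiplier chosen only to match the 280-day estimate):

```python
# Slowdown arithmetic using the figures assumed above; all numbers are
# rough estimates from the discussion, not measurements.
SECONDS_PER_DAY = 86_400
human_slowdown = 12 * SECONDS_PER_DAY           # wall-clock seconds per brain-second

year = 365 * SECONDS_PER_DAY
print(year / human_slowdown)                    # ~30 s of brain time per year of runtime

rat_slowdown = 0.01 * human_slowdown            # rat brain at ~1% of the cost
print(31 * rat_slowdown / SECONDS_PER_DAY)      # ~3.7 days for 31 s of rat brain time

overhead = 280 / 3.7                            # environment/senses/memory multiplier
print(5 * rat_slowdown * overhead / SECONDS_PER_DAY)  # ~45 days for 5 s of rat brain time
```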

Absolutely. From a point of something working at a reduced speed, you could then improve on it and improve on the hardware. Why are you so hung up on execution rate?

Actually, no. I've already considered massively parallel systems. Red Storm dates back to 2005. The limitation of speed in these systems is LAN bandwidth. I've considered lightweight systems and novel architectures like Sony's Cell processor. I've also considered GPU systems. These all have limitations that would keep them well below the peak processing speed. I think that the problems can be solved but not by any of the architectures you've mentioned.

If you've considered execution architectures that are much more efficient as far as filling memory pipelines, why would you describe in detail the most inefficient architecture? Methods of keeping memory pipelines full on modern GPUs are pretty well understood and it's actually very difficult to develop algorithms that aren't crushed by racks of GPUs. (See scrypt)
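For anyone following along, scrypt resists GPU racks because its cost parameters force a large, serially dependent working set into memory. A minimal sketch using Python's standard library (the parameter values are illustrative only, not a security recommendation):

```python
import hashlib

# scrypt is memory-hard: deriving one key needs roughly 128 * r * n bytes of
# RAM with serial data dependencies, which starves the wide-but-shallow
# memory pipelines that GPUs rely on. n=2**14, r=8 forces a ~16 MB working set.
key = hashlib.scrypt(
    b"password",
    salt=b"salt",
    n=2**14,                    # CPU/memory cost factor (power of 2)
    r=8,                        # block size; memory scales as 128 * r * n
    p=1,                        # parallelization factor
    maxmem=64 * 1024 * 1024,    # allow the ~16 MB working set
    dklen=32,
)
print(key.hex())
```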

Additionally, no matter what your workload, you will typically have a bottleneck of one sort or another.
 
You use pejorative terms and draw conclusions about my motives and character based on nothing.
My disbelief and exasperation about the things you were writing made me engage with an already extant line of questioning about your motives and character. As I acknowledged in my last post, I can understand that you felt attacked, and I understand that some of those terms can be taken as 'pejorative'. On the other hand, having worked with a lot of people who exhibit unusual and sometimes problematic cognitive styles, I have probably become used to thinking of some of those terms as morally neutral (and some of them are terms you yourself introduced). An exception, perhaps, is "troll" or "trolling", which isn't yet part of mental health diagnosis. It probably will be before too long.

I apologise for the offence. I hope you will forgive me and can understand that when someone is perceived as repeatedly presenting self-contradictory statements and going off on tangents that blatantly don't answer significant questions, in response to patient, clear and evidenced challenges from educated people in the field being discussed, one reasonable working hypothesis (NOT "conclusion" as you said above) is that the person may be trolling. People do. Millions of them. By which, of course, I don't mean the people who go online just to insult and make threats, but (often very very clever) people who like to avoid logical conclusions of things they've said, put contradictory statements, etc. One common role that such people play out is that they're on the brink of solving, or have solved, or can solve, a major scientific or philosophical issue of the day. Other common traits include mentioning their great intellect (or "specialness"), introducing a wide range of expert topics, presenting short, hit-and-run critiques of prominent writers on these subjects, and jumping around from topic to topic.

I am sorry, however, that I didn't find a better, more level-headed way of expressing my concerns about the content of your posts and challenging you to consider your motives. I am glad, though, that I did put these things to you in one way or another. At least for my sake, I'm learning something about my own discursive style and how I should improve it, and some of what you write here gives me a better understanding of where you're coming from and why I perceive you the way I do. I am reassured that you're probably not trolling. I hope you can also understand that part of my motive for the challenge was for your sake, because you seem to be failing to make progress in these discussions and in whatever is the goal of your project. I would like you to succeed more in whatever you do, and I'm worried that you've become obsessed trying to square a circle.

A few years ago I was trying to write fiction with someone else. She told me that she could only work on one story at a time or she would get the characters confused. I was working on about two dozen stories so that wasn't an issue for me. I can't really imagine getting the characters confused although I can understand that someone else could.

I don't know ahead of time what might be useful so I cover a lot of ground in different areas. Meandering from topic to topic is how I think. It isn't confusing to me but I can understand that it might be to someone else. For example, I watched J. P. Moreland - Loving God With All Your Mind. This is a lecture on apologetics. If you go to 26:28 he starts talking about world views and types of knowledge. This is standard philosophy. His specific framework for knowledge wasn't useful to me. However, I needed something that went beyond Shannon's use of information. Many of the ideas in philosophy are not directly useful to me because they require too many assumptions before you can start (presupposition). On the other hand, computational theory doesn't really include these types of concepts. So, you borrow an idea from philosophy and then approach it with the same structure that you would use in computational theory or math. Likewise, behavioral science is lacking the foundation that would explain what is going on, but it has mountains of evidence. So, if you create an idea about cognition, the obvious way to see if it is ludicrous or if it fits somehow is to try to match it with evidence from behavioral science.
Yes, this is part of our difficulty. I, too, get "characters" (or issues) confused if I try to deal with too many stories. And I understand that creativity thrives often on this brainstorming process, lateral thinking, shoving an idea from one thing into a different area and finding inspiration in unusual places. However, synthesis just leads to a messy kludge without careful, logically rigorous analysis. There's a problem in human cognition sometimes referred to as "over-inclusion", where we're too hungry for those connections and attribute significance to things and so try to squeeze them into our theory, and our excitement at the project we're engaged in causes us to unconsciously tune out our scepticism, glossing over things that don't fit. I'm sure you're aware of these things.

Recently being November, 2013?
The point of the hypothesis I gave (that, since you've always known you were special, that itself doesn't mean you couldn't possibly be motivated by unhelpful things like ego) was not to suggest that it was true: it was to get you to follow the logic of your own illogical statements to see where they broke. (Nor am I saying I'm immune to logical errors.)

Given the subject matter, your repeated failure to follow logical flows like this is alarming.

It's possible. I would have to ask my mother though since I didn't participate in choosing my name when I was born.
Sorry, I couldn't resist. At least it helps me remember your username.

Yet, what is interesting is that while trying to explain this to you I realized why emotions work better than fixed machine states. Thank you. I guess I should write that down.
I'm sort of glad it helped, except that I'm worried that you only think it did. Yeah, here we go again - emotions aren't fixed machine states. Do you understand why people read statements like this and fail to see an alternative that isn't woo? You do this with "consciousness" and "thinking" and "cognition" - you always seem to be saying that current AI theory can't do these....and then you go back to believing you can dream up some alternative that isn't dualism. I may still have misunderstood, of course. I was just reading a blog post that said that a lot of neuroscientists were now coming to the view that consciousness might depend on a particular speed of processing, for example, and this reminded me of a similar argument you made. OTOH, I think I grok RussDill's logic that this suggests a hiatus in the laws of physics. Bottom line, I don't know nearly enough about all this to give an informed opinion.

No, that's quite wrong. The brain is just matter and the mind or consciousness is entirely a function of the brain. The second part you are talking about is whether or not cognition can be duplicated using only computational theory or whether it would require something else. I'm still quite certain that cognition is possible with a machine even if it isn't a Turing/Von Neumann machine.
The emoticon :jaw-dropp doesn't begin to cover it. See, part of my irritation isn't your fault - it's from the endless parade of woos with whom I've had these kinds of conversations. It's always "something else" missing, especially in discussions about human consciousness and cognition. Current technology isn't there yet, current theorists are all wrong, but you're working on the problem. You're not pointing at a soul, it's all physics, but just not the physics we know about...

Again, this isn't correct. Many of the characterizations of my ideas aren't even close to my ideas. So, although those ideas are wanting, they aren't mine. I've even had people repeat the same thing that I've said and then give that as a reason why they disagree with me.
All I can say is that I've been amazed several times when you've claimed this. You say something like "You just repeated what I said as if to refute me!", and it seems the other way round to me. If this is genuine, I apologise, but since several people are taking your statements the opposite way round (and, incidentally, this may also contribute to them thinking you contradict yourself), then it may be a communication difficulty you could work on. Frankly, I'm still struggling not to see it as goalpost mobility (or goalpost entanglement).

You are also lumping into that group the often stated conclusion that an idea must be wrong because I haven't proven it even if there is no proven alternative. I would ask you to quote or link some of these but we both know that you can't.
I've genuinely lost track of that point, but if you want me to give a quote or link, please put it to me again.

This is about as mangled a statement as I have seen. You are confusing different things. Artificial Intelligence theory has been quite productive in what it has been doing since 1956. It has made a number of advances. The Computational Theory of Mind suggests that in time this will develop into something indistinguishable from consciousness. This is also called General AI. In other words, if this idea is correct then a General AI would be capable of acting and reasoning just like a human. I'm saying that this idea is incorrect, that no matter what you do with AI, it will never reason like a human or be indistinguishable from a human. I'm saying that you need an alternative theory and some differences in hardware from what you have with a Turing/Von Neumann machine. In other words, I'm saying that no existing computer based system is capable of becoming conscious or acting in a conscious manner. However, you could build a machine that would do this using different hardware and different foundational theories.
And I'm saying that you have given no, or almost no, indication (as far as I've seen):
1) why you see computational theory as incapable of producing consciousness,
2) what your alternative actually is,
3) what, even in vague terms, could differentiate a machine that could do the job (which I think is the point RussDill says you covered in detail - that such a machine would abuse the laws of physics according to known computational theory), and, if I dare be so bold,
4) that you have a clear enough or deep enough or wide enough understanding of the subject to make the extraordinary claims you make.

It's no good just saying you could come up with an alternative theory and alternative machinery. If current theory is demonstrated as philosophically/mathematically/physically incontrovertible, any new theory has to incorporate it (in the sense of a paradigm shift). If current theories are simply wrong, and you know why, then, as has been pointed out several times, you have a number of major awards on their way.

Another slant on this, which has already been presented in several ways, is that you lump far too many things together. In the above, you mention the ideas "reason like a human", "be indistinguishable from a human", "becoming conscious" and "behaving in a conscious manner" as if they were interchangeable. As has been pointed out, machines already "reason like a human" (if this is parsed in one way). I'm pretty sure nobody expects a machine to be indistinguishable from a human as a necessary condition of strong AI. We arguably will have no way of knowing if a machine has become conscious, if we take the subjective definition thereof. We may, however, find one that behaves in a "conscious manner", if by that we mean a manner that suggests it is conscious. Some of them already do, to a limited degree, if you squint the right amount.

Well, if I'm not working on it in secret then why aren't you aware of what I'm working on? Secondly, I tried discussing it in more detail in the Foundations thread and no one cared about that. Clearly, neither claim is accurate.
That's right - when I say you're discussing your ideas without really discussing them, I'm contradicting myself and should be ignored.:rolleyes:

Well, again, I could ask for you to quote this but once again we both know that you can't. This is little more than yet another baseless conclusion.
Do you remember what point you were answering when you started talking about various Homo species? It may be a result of your multi-threaded thinking style and my inability to follow the characters in stories, but I have no idea. It seemed like side-stepping and trying to avoid answering. And when asked what the point of that tangent was (and a reference to "civine"), you responded with another tangent, Drake's equation. If it makes sense to you, then maybe you just need to slow down and explain all the missing gaps to mere mortals. It looks like obfuscation and sophistry. And that's not my fault or lack of civinity:D.

The reality is that there are vast differences in assumptions between me and other posters on this board, meaning the posters who actually contribute. If General AI and the Computational Theory of Mind are correct then presumably you could build a gigantic machine out of wood and pegs, using water wheels to give it energy. In other words, a machine using no electrical or electronic parts. Now, I doubt this idea is at all unusual to Russ or Clinger. I assume that they would accept that this type of machine would be possible. But the idea bothered me; I don't think the same way they do.
Don't assume it doesn't bother them. I think they are following logically from established principles and presenting what TLOP and computational theory necessarily imply. They might find it mindboggling. I know I do. You write like someone who is fairly new to contemplating how weird emergence is. We're all pretty freaked out. Being a rational sceptic, paradoxically, involves a certain amount of trust in the process of working out certain things culturally. Similarly, it's no good me saying I'm uncomfortable with imaginary numbers and therefore that's probably a rubbish theory and I can probably come up with an alternative one.

But, a feeling isn't worth much; you actually have to prove it.
Yes, and/or the corollary, disprove current theory.

So, I first started working on availability. This relates to how reliable a machine is versus its complexity. It occurred to me that by the time a machine of this sort was complex enough to do work it might be so unreliable that you couldn't keep it running. That seemed like a possible answer.
This reminds me of the idea that, contrary to the early days of AI when people thought we could use computers to understand consciousness or "real" cognition or "free will", it's now looking more likely that by the time we can build those machines, if we ever do, we would have to give their processing such depth and complexity (nested neural nets and god knows what) that the relevant details would be obscure anyway.

However, I still wasn't satisfied. But, eventually I came up with the temporal boundary equations.
"The" temporal boundary equations? Nice. I'm guessing this will be one of those parts you can't talk about.

Those indicated that such a machine would only work in an open environment if it was fast enough.
Again - we're not getting any of the missing steps in your thinking. It would only "...work..."? Be conscious? Cognate? Not bore the operator to death? Are you referring to the point that consciousness requires a certain processing speed or what? How do we know the ocean doesn't very slowly consider the Moon and write sonnets to her on the beaches?

That made it impractical to build. Now, Russ wouldn't care about that because he assumes that you can always simulate an environment slow enough to stay within the boundaries. That is true. However, I still liked the proof because it refuted some concepts in science fiction that had been around for a while. I worked on refuting other concepts like building an intelligent organism out of bacteria or getting nanites to work. Again, I doubt Russ cares much about these ideas but refuting them was important to me.
And we'll have to wait for publication to really know if you've refuted anything.

Let's be honest. Beelz was attacking my character and doing it just as you have been without any reason or evidence.
I don't consider that honest.

Both of you get irritated and so you start making up things about me being a troll or whatever. Maybe it makes you feel better, but it is still childish and it definitely degrades my respect for you. This is not difficult to understand and I'm sure that both you and Beelz do understand it regardless of how you act.
I hope I'm going some way towards disabusing you of that perception. It's really very simple. I'm sure you've done enough logic by now to recognise a conditional. "If he was a troll, that would explain most of what I'm seeing here" isn't accusing you of being a troll or making things up about you.

I could easily be wrong about my ideas.
This would be useful to say earlier in future discussions, and bear in mind generally a bit more than you appear to when you go on about "refuting" this and that and "proving" the other.

Your claim that I don't listen to anyone is patently absurd. When Nonpareil expressed confidence in Integrated Information Theory, I spent time studying it until I understood why it didn't work.
What, IIT didn't work, or your idea related to it? Don't tell me that even when you give an example of you learning, you give one where you're refuting the thing you're learning about!

So, that's why I've asked what step 2 would be. Perhaps I'm overlooking something.
Yes, several things.
a) YOU said exactly steps 1 and 3 yourself last August, and you didn't have any compulsion to add a step 2, so why ask me for step 2? I don't see anything that would obscure it when I said it and not require explanation when you said it.
b) the missing step is that if you publish it, presumably you won't hide significant parts that are necessary for people to understand (and indeed discuss) it, nor throw in something else every five minutes, at least not within its pages,
c) in the context of you being fed up not getting the response you want when you said it, my response meant something like "Make up your mind - either show enough of it or shut up and work on it until you can". I am putting "publish" and "discuss adequately" in the same category, the enabling fruitful discussion category. I'm contrasting that with what I perceive you doing here.
 
Further down:
Okay, you have this:

The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals.

This again agrees with what I said.

The cerebellum has a different structure from the cortex. It has been suggested that the structure of the cerebellum acts as a type of perceptron. It is known that the cerebellum has something to do with physical skill. However, there are unknowns. It has even been suggested that the cerebellum has something to do with cognition since it is larger in mammals. I would probably speculate that the larger cerebellum is related to variety rather than skill. For example, predatory fish seem fairly adept. However, mammals seem to have a wider range of adaptive behaviors.
 
These two statements are contradictory. I thought we covered this. In detail. Repeatedly. Under the currently known laws of physics, it is not possible to build anything that cannot be simulated with a Turing machine or a Von Neumann machine. You are claiming that the brain works according to laws of physics and even mathematics/logic that are unknown.
People don't usually make statements like this which are not only false but known to be false. First of all, the Computational Theory of Mind has never been proven. You would have to prove that first before you could make any other claim. Secondly, no computer can generate entropy. Enumeration theory, which is the foundation of computational theory, does not cover entropy. However, it is possible to generate entropy outside of a computer and then sample the value. That isn't simulation. I would ask you to explain how you could simulate entropy with a computer but we both already know that it isn't even theoretically possible.

However, you are pretty sharp so I would imagine that your next assertion would be that while it isn't possible to generate a truly entropic sequence of infinite length, it might be possible to generate an entropic sequence of finite length; and, since the universe has a finite age, that would be good enough. The only problem with this assertion is that you can't use a post hoc definition of entropic sequence. This only works if you know ahead of time that the sequence will be entropic before it is generated. And you are probably also aware of the ratio of n bits to 2^n values. You could naively suggest that all you would need to do is start with a sufficient length of n for the seed to provide the sequence you need. Unfortunately, there is no known proof for ensuring that a sequence won't repeat regardless of the size of the seed. You can determine this after the fact but that requires infinite memory and infinite speed. So, the best you could argue is that it might be possible but that it remains unproven. Unfortunately, my ideas also might be possible but remain unproven. So, you probably won't argue that.
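To make the distinction concrete, here is a minimal sketch using only the standard library: a deterministic generator with an n-bit seed can only ever emit one of 2^n possible streams, whereas os.urandom samples an entropy pool fed from outside the program (device timings and the like).

```python
import os
import random

# A deterministic PRNG is a pure function of its seed: an n-bit seed gives
# at most 2**n possible output streams, fixed before generation begins.
prng = random.Random(12345)
pseudo = bytes(prng.getrandbits(8) for _ in range(16))

# os.urandom samples the OS entropy pool, which mixes in events gathered
# outside the program. This is sampling an external source, not computing.
sampled = os.urandom(16)

print(pseudo.hex(), sampled.hex())

# Re-seeding reproduces the pseudo stream bit for bit; urandom won't.
assert bytes(random.Random(12345).getrandbits(8) for _ in range(16)) == pseudo
```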

There are a number of people who would like a word with you if you have discovered a way to compute things not possible on Turing machines or Von Neumann machines. Stop bothering with the whole cognition thing, take a break, and provide a piece of hardware that solves one of the undecidability problems.
I have no idea what undecidable problems would have to do with anything I've said. I've never made the claim that a GCS is capable of solving undecidable problems. I pointed out before that if cognition is truly a subset of computational theory (as some assume) then there should be things that a computer could do that a human could not. Other than some smart-ass answers, no one was able to think of any. If you can't think of even one thing that a computer can do but a human cannot then it would seem that cognition is at least equal to computational theory. However, it would also be possible that cognition is greater than computational theory. In other words, it is possible that computational theory is a subset of cognition.

Not only that, but humans regularly exist in a state where they are cut off from environmental input, but still engage in cognitive activities.
This is an example where you repeat what I've said and yet give it as a point of disagreement.
 
Here's another problem where it depends on how you define it. Plenty of lower organisms respond in random ways to environmental stimuli. Additionally, as soon as you have any form of memory, even with only a few neurons, the organism can react in ways that depend on past stimuli, not merely the current environmental stimulus. So a sea sponge has developed a "loss of behavioral correspondence". Heck, even certain protozoa utilize proteins to modify their response to stimulus based on past experience.
That's true but it also doesn't change the point. Loss of correspondence is an obstacle to cognition. You understand statistics. If you have a fixed strategy with a high rate of success then you stick with it. It won't work every time but it has a good probability of success for your offspring. As you lose correspondence you no longer have a fixed strategy and you have to replace it with something else. This is not difficult to understand. For example, some walruses hunt seals instead of eating shellfish. That's a learned instead of fixed behavior. But getting back to your point, it is not a self-learned behavior; it requires a parent. I know that juvenile beavers spend an entire year working in the family pond before striking off on their own. Apparently it takes that long to learn all of the necessary things. In contrast, a snake seems to be born with almost all of its behavior, so it has a high degree of correspondence.

Snipped a bunch of information that does not answer "How you've come about information on identity paradox on Homo Heidelbergensis, Neanderthals, and Denisova" (what research papers, etc). And "the definition of the words civine, and civinity."
That isn't what I said. I apologize but when concepts are clear to me I tend to assume that they are clear to others.

I have the concept of identity paradox. It's a theoretical problem in cognitive theory that would inhibit developing greater cognitive ability.

Does it exist? Is it real or are we talking about a unicorn? Well, since we only have one organism (humans) who would be above this threshold, that's hard to tell. The other hominids like Heidelbergensis, Denisova, and Neanderthal could have been at the identity threshold.

Unfortunately, we don't have any of these hominids handy to do behavioral studies to find out. So, is there any other way to tell?

I pointed out that the Drake equation includes the concept of civilization. And, you could use the word civine as an adjective meaning 'having the characteristics of building a civilization' (because other derivatives like 'civilized' have a different connotation). However, that would be a post hoc definition so it wouldn't be very useful. But, you could look at other characteristics like what organisms do with surplus resources to try to create a predictive definition or model of civinity.
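For reference, the standard form of the Drake equation; the civilization concept enters through the f_c term:

```latex
N = R_{*} \, f_p \, n_e \, f_l \, f_i \, f_c \, L
% R_* : rate of star formation in the galaxy
% f_p : fraction of stars with planets
% n_e : habitable planets per star that has planets
% f_l : fraction of those on which life appears
% f_i : fraction of those that develop intelligence
% f_c : fraction of those that develop a detectable civilization
% L   : length of time such a civilization remains detectable
```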

That then leads you to the question of whether civinity is directly related to the identity paradox. Would it be possible to get past the identity threshold without being civine? Would it be possible to be civine but below the identity threshold? Since there don't seem to be any other hominids that were civine we can probably assume that civinity is not possible below the identity threshold. Now again, this does not prove that the identity threshold exists but you still have to question why only Sapiens became civine. In other words, if it is a short step from Heidelbergensis to Sapiens as suggested by the time span over which Sapiens developed then why didn't it happen more often? The identity paradox is a theoretical reason.

So you think there is a different path that produces the same results of cognition without cognition.
That wasn't what I said. We could always assume that human cognition was the result of limited choices in evolutionary development based on, say, the brain structure of our vertebrate ancestors and the necessity of survival within the environment. However...when you start talking about incremental advancements with neural networks or AI then no such limitations or necessities exist. So, how do you argue that this is likely to result in cognition instead of branching off into any of the vast number of local minima? When you play on a golf course then the sequence of holes from 1-18 will lead from beginning to end. But, let's say you were playing on a course that was 10,000 square miles and contained millions of holes and only a single #18. How would you know where you were at any given point in time?

The argument is not to claim that cognition was inevitable, but that it is possible to obtain in small incremental steps. A partially cognitive system is still useful. Designing increasingly complex and capable learning systems offers a clear path forward. Attempting to design it all in one go does not.
You seem to still be making the assumption that you are on a single path. You could be making progress along a path that will never lead to a cognitive system.

You are out there on your own on your whole temporal theory thing.
True, I'm out here on my own in a world where not a single theory exists to design a thinking machine. And, where not one project exists that is on a theoretical track to develop such a machine. I could be working on something that simply won't work but it is unquestionably the case that there is no one ahead of me.

Absolutely. From a point of something working at a reduced speed, you could then improve on it and improve on the hardware.
I'll have to consider whether or not a working programming model could demonstrate something.

Why are you so hung up on execution rate?
There are questions that have to be answered. Why are humans the only species with high intelligence and reasoning? I've wondered why chordates seem to be smarter. I think that might be related to hemispherical development. But, dinosaurs were around for a long, long time, much longer than mammals. So, why weren't there dinosaur civilizations? I don't want to resort to special pleading or a claim of a lucky accident. I think the primary limitation was fetal brain development in species that laid eggs. That would explain why mammals seem to be smarter. But then the job gets tougher as you try to explain why primates seem to be smarter and hominids and humans. But you also have to ask why cognition would develop in the first place and what stopped it from developing faster. One of these limitations is speed. And, in terms of computers, speed is also an issue. We no longer seem to be increasing clock speed. I also pointed out that there has been no increase in memory speed in 20 years for data that isn't predictable. These problems are similar with both evolutionary development and for new processing systems.
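A minimal sketch of that predictable-versus-unpredictable distinction, assuming numpy is available: a streaming pass lets the hardware prefetcher hide memory latency, while data-dependent indexing pays something close to the full latency on each access.

```python
import time
import numpy as np

n = 20_000_000
a = np.arange(n, dtype=np.int64)
idx = np.random.permutation(n)   # data-dependent, unpredictable access order

t0 = time.perf_counter()
s1 = a.sum()                     # sequential scan: prefetcher hides latency
t1 = time.perf_counter()
s2 = a[idx].sum()                # random gather: latency-bound
t2 = time.perf_counter()

assert s1 == s2                  # same data, very different access cost
print(f"sequential: {t1 - t0:.3f}s   random: {t2 - t1:.3f}s")
```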

If you've considered execution architectures that are much more efficient as far as filling memory pipelines, why would you describe in detail the most inefficient architecture? Methods of keeping memory pipelines full on modern GPUs are pretty well understood and it's actually very difficult to develop algorithms that aren't crushed by racks of GPUs.
Again, you are not talking about the same things I am. My late wife was a computer operator. I'm quite familiar with batch jobs. You can always arrange these for most efficient operation. That isn't what I'm talking about.

The oldest application relating to what I am talking about is probably the AMD Froblins demo from 2008. There's a technical description of it here and you can see a video of the demo here. It uses the GPU as the workhorse processor to calculate movements, decisions, and the graphics. This is related because the actions of the Froblins aren't predictable; the data has to be processed in real-time.

Additionally, no matter what your workload, you will typically have a bottleneck of one sort or another.
Well, yes, but you still have to be within the temporal boundaries in order to have cognition. This is not a difficult concept. You need a minimum amount of torque for a motor to turn, you need a minimum tensile strength for a rope not to break, and you need a minimum voltage to overcome resistance. But it gets far worse with electronic devices. A television set that could not process the NTSC (SMPTE 170M) signal at least as fast as it arrived would not have worked. Curiously, this was exactly why the Intel 8008 was put on the market; it was too slow to be used for its intended purpose, a CRT controller. But, as you've already stated many times, typical computing functions are not time sensitive so it was useful as a processor.
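A toy model of that keeping-up constraint (numbers invented for illustration): if each input takes longer to process than the interval at which inputs arrive, the backlog diverges, so the system fails outright rather than merely running slowly.

```python
# Toy hard-real-time model: items arrive every `arrival_interval` seconds
# and each costs `processing_time` seconds. If processing_time exceeds
# arrival_interval, the backlog grows without bound; the system doesn't
# run slower, it stops working. (Illustrative numbers, not measurements.)
def backlog_after(seconds, arrival_interval, processing_time):
    arrived = seconds / arrival_interval
    processed = min(arrived, seconds / processing_time)
    return arrived - processed

print(backlog_after(60, arrival_interval=1/30, processing_time=1/60))  # 0.0: keeps up
print(backlog_after(60, arrival_interval=1/30, processing_time=1/15))  # 900 frames behind
```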
 
I'm just wondering what the issue is with processing speed in relation to human consciousness, cognition, reasoning, etc., and their artificial analogues. I think what we call consciousness is pretty slow, rather crude models of self and other and relationships thereof, so it doesn't actually require much bandwidth at all. I don't know what consciousness is, of course, but nor does anyone else, apparently. The way I think of it is a bit like a database query (I think I read the idea in Susan Blackmore). Most of the day, you're not conscious. If someone asks you "Are you conscious now?", you'll become conscious, indeed self-conscious or self-aware, but all day you were living on auto-pilot. The dichotomy between awareness and self-awareness may even be false, but that's perhaps another issue.

The more we learn about other animals, the less impressive is the difference between us. Something happened at the "cognitive revolution" about 70,000 years ago, perhaps a mutation, maybe something else, and it allowed us to communicate in a new way with our fellow humans, about high-level abstractions, things that don't exist rather than things we can point at and name. We began making social and religious mythology. But none of this requires fast processing, does it? Walking, running and catching prey takes much faster processing. Being a rational "conscious" human involves some simple concepts, referred to from time to time when they're useful, like "me" and "you" and "here" and "there". Where's the need for speed or large amounts of memory? Maybe I'm missing something.
 
Snipped a bunch of information that does not answer "How you've come about information on identity paradox on Homo Heidelbergensis, Neanderthals, and Denisova" (what research papers, etc). And "the definition of the words civine, and civinity."
That isn't what I said. I apologize but when concepts are clear to me I tend to assume that they are clear to others.
When you invent your own personal definitions for words, you shouldn't assume their meanings are clear to others.

For example:

And, you could use the word civine as an adjective meaning 'having the characteristics of building a civilization' (because other derivatives like 'civilized' have a different connotation). However, that would be a post hoc definition so it wouldn't be very useful. But, you could look at other characteristics like what organisms do with surplus resources to try to create a predictive definition or model of civinity.
I take that to mean you are using a word (civinity) with a meaning you invented yourself. That is not how effective communication is done.

That then leads you to the question of whether civinity is directly related to the identity paradox. Would it be possible to get past the identity threshold without being civine? Would it be possible to be civine but below the identity threshold? Since there don't seem to be any other hominids that were civine we can probably assume that civinity is not possible below the identity threshold. Now again, this does not prove that the identity threshold exists but you still have to question why only Sapiens became civine.
Questions can be asked clearly, but questions can also be asked in a way that obfuscates the question. Using the word "civine" to ask "Why is Homo Sapiens the only species to have built a civilization, as civilization is defined by most members of species Homo Sapiens?" is not the clearest way to ask that question.

When you play on a golf course then the sequence of holes from 1-18 will lead from beginning to end. But, let's say you were playing on a course that was 10,000 square miles and contained millions of holes and only a single #18. How would you know where you were at any given point in time?
A golf course with millions of holes would not be a regulation 18-hole course.

That remark has just as much to do with cognition as your question has to do with cognition.

I could be working on something that simply won't work but it is unquestionably the case that there is no one ahead of me.
Many roads lead nowhere. Most are so seldom travelled that you are unlikely to see anyone ahead of you.
 
I'm out here on my own in a world where not a single theory exists to design a thinking machine. And, where not one project exists that is on a theoretical track to develop such a machine. I could be working on something that simply won't work but it is unquestionably the case that there is no one ahead of me.
Here's one. J E Tardy
 
When you invent your own personal definitions for words, you shouldn't assume their meanings are clear to others.

The post you are referencing isn't a formal paper submitted to a professional journal. If it were then terms would be defined and arguments would be clearer. But you knew this before you posted. Surely you aren't singling out my post to rail against the informality of the board. So, what was the purpose of your post?
 
That explains a lot about your position.


Instead of constantly getting defensive, maybe you ought to simply respond to the link he posted. It might also be helpful to unpack these pithy little comments a little just so we know that what appears to be a pejorative statement isn’t actually a pejorative statement.

…unless it is.

I’d be curious to know how that link does not represent ‘someone’ who seems to, at the very least, be on the same road…if not ahead of you.
 
The post you are referencing isn't a formal paper submitted to a professional journal. If it were then terms would be defined and arguments would be clearer. But you knew this before you posted. Surely you aren't singling out my post to rail against the informality of the board. So, what was the purpose of your post?


Maybe W.D.Clinger was just being juvenile (or something else)…but you must have known that no one would have a clue as to the meaning of those words you used (I certainly didn’t).

…so why did you not make some effort to define them? What’s the point of making arguments if you know that your readers cannot comprehend them? Perhaps you were simply so caught up in expressing yourself that you completely overlooked the fact that no one had ever heard those words before. I know creative people who behave that way.
 
I'm out here on my own in a world where not a single theory exists to design a thinking machine. And, where not one project exists that is on a theoretical track to develop such a machine. I could be working on something that simply won't work but it is unquestionably the case that there is no one ahead of me.


That explains a lot about your position.

Which is...sceptical and a bit pedantic?
 
Okay, you have this:

The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals.

This again agrees with what I said.

I'm not sure how you think that confirms that the skill of bipedal motion is contained within the cerebellum. I just linked to a study that shows that the cerebral motor cortex is utilized in rhythmic bipedal motion.
 
People don't usually make statements like this, which are not only false but known to be false. First of all, the Computational Theory of Mind has never been proven. You would have to prove that first before you could make any other claim. Secondly, no computer can generate entropy. Enumeration theory, which is the foundation of computational theory, does not cover entropy. However, it is possible to generate entropy outside of a computer and then sample the value. That isn't simulation. I would ask you to explain how you could simulate entropy with a computer, but we both already know that it isn't even theoretically possible.

However, you are pretty sharp, so I would imagine that your next assertion would be that while it isn't possible to generate a truly entropic sequence of infinite length, it might be possible to generate an entropic sequence of finite length; and, since the universe has a finite age, that would be good enough. The only problem with this assertion is that you can't use a post hoc definition of an entropic sequence. This only works if you know ahead of time that the sequence will be entropic before it is generated. And you are probably also aware of the ratio of n bits to 2^n values. You could naively suggest that all you would need to do is start with a sufficient seed length n to provide the sequence you need. Unfortunately, there is no known proof for ensuring that a sequence won't repeat, regardless of the size of the seed. You can determine this after the fact, but that requires infinite memory and infinite speed. So, the best you could argue is that it might be possible but that it remains unproven. Unfortunately, my ideas also might be possible but remain unproven. So, you probably won't argue that.
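A minimal sketch of the seed-counting point, with toy parameters rather than a real-world generator: a deterministic generator with an n-bit internal state can visit at most 2^n states before it revisits one, and from that point its output cycles.

# Toy illustration in Python: an 8-bit generator has at most 256
# distinct states, so its output must cycle within 256 steps.
def lcg8(state):
    # one step of a small linear congruential generator mod 2**8
    return (5 * state + 3) % 256

def cycle_length(seed):
    seen = {}  # state -> step at which it first appeared
    state, step = seed, 0
    while state not in seen:
        seen[state] = step
        state = lcg8(state)
        step += 1
    return step - seen[state]

print(cycle_length(42))  # never exceeds 256, whatever the seed

Of course, this shows only that a finite-state generator must eventually repeat; whether such a repeat matters for cognition is the question under dispute.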

To prove the computational theory of mind, you'd have to either prove that the currently known laws of physics are complete, or you'd need to show something similar to a human mind operating according to the known laws of physics. There is very little wiggle room here where people can claim that biological processes operate in ways which violate the known laws of physics.

I think it's interesting that you think that true random numbers are the magic ingredient which makes cognition and consciousness possible. Let's say we had a piece of hardware that can simulate everything about the human brain, except that it cannot produce true random numbers. Now, we figure out that to run it for an hour, we need 8TB of random data. On one storage device, we store the output of a true random number generator. On another storage device, we store the output of a high-quality pseudorandom number generator. You are saying that one input would produce cognition and the other would not. How many bits can we change in the true random data set before cognition stops working?

This leads to an interesting result, which circles back around to the question of whether entropy can be simulated. We run the hardware with a pseudorandom input. As we run, we detect whether it is working or failing. If it works, we continue; if it fails, we generate a new pseudorandom block. We could then not only produce cognition without any source of true entropy, but we could use a computer to generate true random numbers. We've now reached reductio ad absurdum: we've generated a sequence of numbers by using a defined algorithm, so the numbers are by definition pseudorandom. This being a mathematical proof, we don't even need to build the hardware. We can just say: for each possible sequence of numbers storable within the 8TB of storage, run the cognition program; every sequence on which it succeeds is a "true random number".
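The structure of that argument can be written out as a sketch. Here cognition_works() is a purely hypothetical oracle standing in for "run the simulated brain on this block and see whether cognition emerges"; no such test exists, which is the point of the reductio.

import random

def cognition_works(block):
    # hypothetical oracle, assumed only for the sake of argument
    raise NotImplementedError

def certified_block(size_bytes, seed=0):
    rng = random.Random(seed)  # deterministic, i.e. pseudorandom
    while True:
        block = rng.randbytes(size_bytes)
        if cognition_works(block):
            return block  # a "true random" block emitted by a PRNG?
        # otherwise move on to the next pseudorandom candidate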

I have no idea what undecidable problems would have to do with anything I've said. I've never made the claim that a GCS is capable of solving undecidable problems. I pointed out before that if cognition is truly a subset of computational theory (as some assume) then there should be things that a computer could do that a human could not. Other than some smart-ass answers, no one was able to think of any. If you can't think of even one thing that a computer can do but a human cannot then it would seem that cognition is at least equal to computational theory. However, it would also be possible that cognition is greater than computational theory. In other words, it is possible that computational theory is a subset of cognition.

This again explains so much. You keep using terms related to computer science, but you don't seem to know what they mean.

https://en.wikipedia.org/wiki/Undecidable_problem

"In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is known to be impossible to construct a single algorithm that always leads to a correct yes-or-no answer."

Running such algorithms is what Turing machines and Von Neumann architecture computers do. If you are saying that there is a machine that produces a result, and a Turing machine is not capable of producing that same result, you are declaring that it is an undecidable problem.
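For readers following along, the standard diagonal argument behind undecidability can be sketched in a few lines. This is the textbook construction, not anything specific to this thread; halts() is assumed to exist only so we can derive the contradiction.

def halts(program, data):
    # suppose this always returned a correct True/False
    ...

def paradox(program):
    if halts(program, program):  # if the tester says "halts"...
        while True:              # ...loop forever;
            pass
    # ...otherwise halt immediately.

# paradox(paradox) halts if and only if it doesn't halt,
# so no such halts() can exist.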

And BTW, the claim is not that cognition is a subset, but that cognition is Turing equivalent. This ignores the limited storage capability of the human brain, though. Ignoring storage limitations, there is nothing a human can do that a Turing machine cannot, and vice versa. Of course, once you add in the storage component, there are plenty of things that computers can do that humans cannot, or things that computers can do much better than humans. Chess matches against computers are now actually handicapped in favor of the human grandmaster.

http://en.chessbase.com/post/komodo-9-odds-matches-against-gms

This is an example where you repeat what I've said and yet give it as a point of disagreement.

OK, humans are still able to perform cognitive tasks when cut off from their environment. Then you agree that there is no temporal requirement for generating cognition and consciousness; glad we can move past the whole "temporal theory" thing.
 
That's true, but it also doesn't change the point. Loss of correspondence is an obstacle to cognition. You understand statistics: if you have a fixed strategy with a high rate of success, then you stick with it. It won't work every time, but it has a good probability of success for your offspring. As you lose correspondence, you no longer have a fixed strategy and you have to replace it with something else. This is not difficult to understand. For example, some walruses hunt seals instead of eating shellfish. That's a learned rather than a fixed behavior. But getting back to your point, it is not a self-learned behavior; it requires a parent. I know that juvenile beavers spend an entire year working in the family pond before striking off on their own; apparently it takes that long to learn all of the necessary things. In contrast, a snake seems to be born with almost all of its behavior, so it has a high degree of correspondence.

I have no idea why you refer to this as an obstacle. Clearly, animals that react based on past events are more successful. Their only handicap is the additional biological requirement of supporting a neural network. In fact, correspondence is a handicap: if an organism always responds the same way, predators will quickly adapt. Back on point, though, this whole "loss of correspondence" thing seems to be a red herring. It's been around since before chordates.
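For what it's worth, the statistics point being argued over can be made concrete with a toy simulation; every number in it is invented purely for illustration. A fixed response does well while the environment matches it (correspondence) and fails after the environment shifts, while a learner tracks the shift.

TRIALS = 10_000

def environment(t):
    # the "right" response drifts halfway through (loss of correspondence)
    return 0 if t < TRIALS // 2 else 1

# fixed strategy: always answer 0
fixed_wins = sum(environment(t) == 0 for t in range(TRIALS))

# learned strategy: copy the last observed correct answer
guess, learned_wins = 0, 0
for t in range(TRIALS):
    correct = environment(t)
    learned_wins += (guess == correct)
    guess = correct

print(fixed_wins / TRIALS)    # ~0.5: fixed behavior fails after the shift
print(learned_wins / TRIALS)  # ~1.0: the learner tracks the change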


That isn't what I said. I apologize but when concepts are clear to me I tend to assume that they are clear to others.

I have the concept of identity paradox. It's a theoretical problem in cognitive theory that would inhibit developing greater cognitive ability.

Does it exist? Is it real, or are we talking about a unicorn? Well, since we have only one organism (humans) that would be above this threshold, that's hard to tell. The other hominids like Heidelbergensis, Denisova, and Neanderthal could have been at the identity threshold.

Unfortunately, we don't have any of these hominids handy to do behavioral studies to find out. So, is there any other way to tell?

I pointed out that the Drake equation includes the concept of civilization. And you could use the word civine as an adjective meaning 'having the characteristics of building a civilization' (because other derivatives like 'civilized' have a different connotation). However, that would be a post hoc definition, so it wouldn't be very useful. But you could look at other characteristics, like what organisms do with surplus resources, to try to create a predictive definition or model of civinity.
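For reference, the usual form of the Drake equation is

N = R* × f_p × n_e × f_l × f_i × f_c × L

where f_c is the fraction of planets with intelligent life that go on to develop a civilization releasing detectable signals, and L is the lifetime of such a civilization. The proposed adjective 'civine' is, in effect, a name for the property counted by f_c.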

That then leads you to the question of whether civinity is directly related to the identity paradox. Would it be possible to get past the identity threshold without being civine? Would it be possible to be civine but below the identity threshold? Since there don't seem to be any other hominids that were civine we can probably assume that civinity is not possible below the identity threshold. Now again, this does not prove that the identity threshold exists but you still have to question why only Sapiens became civine. In other words, if it is a short step from Heidelbergensis to Sapiens as suggested by the time span over which Sapiens developed then why didn't it happen more often? The identity paradox is a theoretical reason.

So in other words, you have nothing, just a lot of what if this, what if that. And you still haven't defined 'identity paradox'; I cannot find it as a concept within cognitive theory, as you claim it to be. I can only find it in reference to things like the ship of Theseus.

Additionally, you are going on concepts that don't have rigid definitions, and unless you define things rigidly, any results are useless. What's worse, the usual definition of civilization actually mentions humans, so by that definition nothing non-human can form a civilization. Without a firm definition, ants or bees could be considered to form civilizations; by other definitions, groups of primates could be said to form civilizations, especially those that modify their environment.

Additionally, humans existed as cognitive, conscious creatures without any form of civilization, and in many places they continue to do so. Civilization appears to be a side effect, so I'm not sure why you are devoting so much attention to it.

That wasn't what I said. We could always assume that human cognition was the result of limited choices in evolutionary development based on, say, the brain structure of our vertebrate ancestors and the necessity of survival within the environment. However...when you start talking about incremental advancements with neural networks or AI, then no such limitations or necessities exist. So, how do you argue that this is likely to result in cognition instead of branching off into any of the vast number of local minima? When you play on a golf course, the sequence of holes from 1-18 will lead from beginning to end. But let's say you were playing on a course that was 10,000 square miles and contained millions of holes and only a single #18. How would you know where you were at any given point in time?

You seem to still be making the assumption that you are on a single path. You could be making progress along a path that will never lead to a cognitive system.

Clearly we are going down many many paths simultaneously, with many of them being dead ends, but the end result being a gradual improvement.
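The disagreement here can be illustrated with a toy hill-climber; the landscape function is invented purely to show how greedy steps stall at a local optimum, which is the "wrong hole" in the golf-course analogy.

import math

def fitness(x):
    # an invented bumpy landscape: many local peaks, better values at large x
    return math.sin(5 * x) + 0.1 * x

x, step = 0.0, 0.01
while True:
    if fitness(x + step) > fitness(x):
        x += step
    elif fitness(x - step) > fitness(x):
        x -= step
    else:
        break  # stuck: neither neighbor is better

print(x, fitness(x))  # a local peak near x = 0.32, far from the best values

Whether incremental AI research behaves like this single greedy walk, or like the many parallel walks described above, is exactly what's in dispute.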

True, I'm out here on my own in a world where not a single theory exists to design a thinking machine. And, where not one project exists that is on a theoretical track to develop such a machine. I could be working on something that simply won't work but it is unquestionably the case that there is no one ahead of me.

Since cognition is a spectrum, we already have many thinking machines.

There are questions that have to be answered. Why are humans the only species with high intelligence and reasoning? I've wondered why chordates seem to be smarter than other animals; I think that might be related to hemispherical development. But dinosaurs were around for a long, long time, much longer than mammals. So, why weren't there dinosaur civilizations? I don't want to resort to special pleading or a claim of a lucky accident. I think the primary limitation was fetal brain development in species that laid eggs. That would explain why mammals seem to be smarter. But then the job gets tougher as you try to explain why primates seem smarter still, and then hominids, and then humans. But you also have to ask why cognition would develop in the first place and what stopped it from developing faster. One of these limitations is speed. And, in terms of computers, speed is also an issue. We no longer seem to be increasing clock speed. I also pointed out that there has been no increase in memory speed in 20 years for data that isn't predictable. These problems are similar for both evolutionary development and for new processing systems.

In 1998, tRC was 84 ns. Today it's about 45 ns. However, as discussed above, it doesn't matter that the accesses are not predictable, as you can perform many accesses in parallel, hiding the latency and achieving full bandwidth. Additionally, memory latencies have actually drastically improved: if you want access to 16MB of memory very quickly, you can access it with a latency of about 4 ns using consumer processors, and smaller amounts are accessible with even lower latency. In 1998, accessing 16MB of memory would have required 20 times the latency.
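The latency-hiding claim is just Little's law: sustained throughput equals outstanding requests divided by latency. A back-of-the-envelope sketch (the 45 ns figure is from above; the line size and request counts are illustrative assumptions):

latency_s = 45e-9   # ~45 ns row-cycle time, as cited above
line_bytes = 64     # typical cache-line transfer (assumption)

for outstanding in (1, 10, 40):
    bytes_per_s = outstanding * line_bytes / latency_s
    print(f"{outstanding:2d} in flight: {bytes_per_s / 1e9:6.1f} GB/s")

# 1 in flight:  ~1.4 GB/s  (unpredictable, serial accesses crawl)
# 40 in flight: ~56.9 GB/s (the same latency, hidden by parallelism)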

And once again, these issues are very unimportant. Additionally, the surviving egg-laying dinosaurs (birds) are capable of very complex cognitive tasks and tool use. Many bird species are rivaled in intelligence only by certain marine mammals, humans, and other primates.

As far as cognition and speed go, speed is important when it comes to natural selection: a slow reaction is often as bad as no reaction. In the area of computational theory, speed is not important; nothing is going to come along and eat your program. Computational speed is another area you seem hung up on, for what reason I have no idea.


Again, you are not talking about the same things I am. My late wife was a computer operator. I'm quite familiar with batch jobs. You can always arrange these for most efficient operation. That isn't what I'm talking about.

Good, because I'm not talking about batch jobs either (see the WP article on batch processing). I'm talking about using parallelism within a single workload to hide latencies. I'm confused why you keep mentioning memory latency as if it were somehow important or a limiting factor.

Well, yes, but you still have to be within the temporal boundaries in order to have cognition. This is not a difficult concept. You need a minimum amount of torque for a motor to turn, you need a minimum tensile strength for a rope not to break, and you need a minimum voltage to overcome resistance. But it gets far worse with electronic devices. A television set that could not process NTSC (RS-170A) signals at least as fast as they arrive would not have worked. Curiously, this was exactly why the Intel 8008 was put on the market: it was too slow to be used for its intended purpose, a CRT controller. But, as you've already stated many times, typical computing functions are not time sensitive, so it was useful as a processor.

So, of who knows how many cognitive researchers out there, exactly one has come across the concept of "temporal boundaries" and considered it important, and he claims that this key concept no one else gets is "not a difficult concept". Clearly.

These analogues to torque, tensile strength, voltage, etc., have absolutely no equivalence in computation theory. Please, give me a method for determining exactly how many computes (I just made up that term) I need to handle a given algorithm. There is no algorithm that comes with a requirement for how quickly or slowly the steps must be followed. If I'm sitting there with a pencil and paper working out an algorithm, will a timer suddenly buzz and make my results invalid? What you are claiming is absurd.

I can process NTSC (RS-170A) signals at any speed I please. Heck, when I go over stuff in my head, I process things mentally one clock cycle at a time, things that in the real world happen in a few billionths of a second.

If the magic of cognition is being able to control a CRT, and the 8008 is a brain, you are saying that the 8008 is too slow. Ah, but it's no problem, just get a slower CRT that functions at a slower clock rate. The 8008 will control it just fine.
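That point, that an algorithm carries no wall-clock deadline, can be shown with a trivial sketch: simulated time is just a variable, decoupled from real time. The signal function here is an invented stand-in, not an actual NTSC decoder.

import math, time

SIM_STEP = 1 / 15_734  # one NTSC scanline period, ~63.5 microseconds

def signal(t):
    # hypothetical stand-in for a sampled video signal
    return math.sin(2 * math.pi * 60 * t)

sim_t = 0.0
for _ in range(5):
    sample = signal(sim_t)  # the result depends only on sim_t...
    time.sleep(0.5)         # ...no matter how slowly we get around to it
    print(f"t = {sim_t * 1e6:7.1f} us  sample = {sample:+.3f}")
    sim_t += SIM_STEP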

And as a side point, the 8008 was not used in the product in question because it was not ready in time; otherwise it would have functioned just fine. Discrete TTL was used instead. It was the next version of the product where the 8008 was rejected for performance reasons, as the newer product was supposed to be faster.
 
