
Hammering out cognitive theory

barehl
Master Poster · Joined Jul 8, 2013 · 2,655 messages

There really isn't a good place to put this because of the overlap in so many other areas. I guess here is as good a place as any.

I've been working on formalizing knowledge theory. One of the issues I've run into is AI learning systems. This involves trying to distinguish between cognitive learning and AI learning. However, I haven't been able to find an AI system that can learn. Either I'm just not looking hard enough, or the AI definition is very different from what I consider learning to be. The examples that I've found just seem to be indirect reprogramming rather than learning. Is there a learning system that doesn't require a human coach?
 
Most computer "learning" is done by giving a program a training set of examples that have already been categorized by humans (or by another program).

So to train a computer to recognize visual images you would have the computer look at thousands of images with some labels attached: 'this is a cat', 'this is a number', 'this is a bicycle', ... and so on.
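A hedged sketch of that labelled-training setup, using a toy nearest-centroid classifier; the feature vectors and labels below are invented stand-ins for real image features, which would of course be far higher-dimensional:

```python
# Minimal sketch of supervised learning: a nearest-centroid classifier.
# The "training set" is a handful of labelled 2-D feature vectors
# (toy stand-ins for image features); the labels come from a human.

def train(examples):
    """Average the feature vectors for each label to get one centroid per class."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

training = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
            ([5.0, 5.0], "bicycle"), ([4.8, 5.2], "bicycle")]
model = train(training)
print(classify(model, [1.1, 1.0]))   # a point near the "cat" examples
```

The point of the sketch is that the program never contains the concept "cat" anywhere; it only generalizes from the human-supplied labels.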

It's trickier to have a learning program that works with unlabelled examples, but it has been done. One program "watched" YouTube videos and tried to group them into categories. It was quite successful at finding groups we would call "animal videos" and within that group it separated out a "cat video" group fairly well - all this without the programmers ever explicitly programming into the code any goals or definitions like "animal", "cat", etc. I'm sure the program also separated out categories like "car videos" and such - but I suppose that, since there are so many cat videos on YouTube, this was one of the biggest groups the program found.

https://youtu.be/TK4qLwTye_s
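The unlabelled grouping described above can be sketched with a toy k-means clustering run; the data points below are invented one-dimensional stand-ins, and a real video-clustering system would work on enormously richer features:

```python
# Minimal sketch of unsupervised grouping: k-means on unlabelled 1-D points.
# No labels are supplied; the algorithm discovers two groups on its own,
# loosely analogous to a system grouping videos without being told "cat".

def kmeans(points, k=2, iters=20):
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centroids, clusters = kmeans(points)
print(sorted(round(c, 1) for c in centroids))   # two discovered group centres
```

Nothing in the code names the groups; the "meaning" of each cluster is imposed afterwards by a human looking at its members.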
 
Unfortunately, Turing comes back into the picture. I suppose the strongest attribute of Turing Machine theory is its ability to simplify problem sets.

Let's say that I had a stack of punch cards and a punch-card reader. I have an automated system that can take the cards out of the sorting bins and put them back into the loader. I have a measuring device that can determine the height of each stack in each bin. Now I need a couple more things. I need a counter to run through each setting on the punch-card reader. Then I need something to keep track of two measurements, the highest stack value and the counter number at this setting. It automatically reloads and runs through the stack with each setting, then it increments the counter. The current value is compared with the high value and replaces it if the new value is greater. When this completes, it will have the setting that will give the highest grouping. I could claim that the punch-card reader has learned a grouping in a way very similar to the picture grouping of animals. If this is learning then even a punch-card reader can learn.
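For what it's worth, the punch-card procedure above can be written out as a plain brute-force search loop; `stack_height()` here is a hypothetical stand-in for the measuring device, invented just for the sketch:

```python
# A sketch of the punch-card thought experiment: exhaustively try every
# "setting", measure the resulting tallest stack, and remember the best one.

def stack_height(setting):
    # Hypothetical measurement: for illustration, grouping peaks at setting 7.
    return 100 - (setting - 7) ** 2

best_setting, best_height = None, float("-inf")
for setting in range(16):            # the counter runs through each setting
    height = stack_height(setting)   # run the stack through the reader
    if height > best_height:         # compare with the stored high value
        best_setting, best_height = setting, height

print(best_setting, best_height)     # the setting that gave the best grouping
```

Whether this exhaustive sweep deserves the word "learning" is exactly the question being argued in the post above.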
 
By the same token, I can say that any computer or indeed any brain is made of atoms - so even a bunch of atoms can learn.

Of course the bunch of atoms has to be above a certain size (but remember that honeybees and similar with their tiny brains can learn quite impressively). Most importantly the arrangement of the atoms has to be in the correct pattern.

In one sense it's the pattern that allows the learning. The objects that make up the pattern might be neurons or transistors or relays or mechanical devices - this may affect the size and speed of the learning apparatus - but the ability to learn is determined by the pattern rather than the pieces that make it up.
 

The OP is using a false dichotomy. There is no sharp boundary between artificial intelligence (AI) and cognitive intelligence (CI).

Even the cockroach can learn. Since cockroaches are not artificial, they can’t show AI. However, I don’t know if the OP would call it cognitive intelligence.

I don't think cockroaches need a human coach. However, some human scientists have been coaching cockroaches. You may want to call them Coacharoaches.


Here are some links and quotes.

http://www.bioone.org/doi/pdf/10.2108/zsj.18.21
‘This study shows that cockroaches have an excellent olfactory learning capability characterized by rapid acquisition, extremely long retention and easy re-writing of memory.’


http://www.sciencedirect.com/science/article/pii/0003347270900187
‘Intact cockroaches, headless insects and isolated segments of the animal were trained to avoid electric shocks, contingent upon an electric signal which was applied a fraction of a second prior to the shocks. The results showed that all the intact insects learned to avoid the shocks in approximately ten sessions, on a schedule of two daily sessions and ten trials per session. The retention of the learned responses was further tested; it showed that there was no decrement of avoidance reactions till the end of insect life in the experimental conditions. Furthermore, this retention of the learned behaviour was not affected by the severance of the head nor even by the isolation of a single ganglion, once the behaviour had been established.’


Don’t leave ground-coffee, :blush:, I mean coffee on the ground.

http://news.vanderbilt.edu/2007/09/...he-morning-and-geniuses-in-the-evening-58462/
“Cockroaches are morons in the morning and geniuses in the evening.”

“It is very surprising that the deficit in the morning is so profound,” says Page. “An interesting question is why the animal would not want to learn at that particular time of day. We have no idea.”


http://www.pnas.org/content/104/40/15905.full.pdf
‘We show here that the ability of the cockroach Leucophaea maderae to acquire olfactory memories is regulated by the circadian system. We investigated the effect of training and testing at different circadian phases on performance in an odor-discrimination test administered 30 min after training (short-term memory) or 48 h after training (long-term memory). When odor preference was tested by allowing animals to choose between two odors (peppermint and vanilla), untrained cockroaches showed a clear preference for vanilla at all circadian phases, indicating that there was no circadian modulation of initial odor preference or ability to discriminate between odors. After differential conditioning, in which peppermint odor was associated with a positive unconditioned stimulus of sucrose solution and vanilla odor was associated with a negative unconditioned stimulus of saline solution, cockroaches conditioned in the early subjective night showed a strong preference for peppermint and retained the memory for at least 2 days.’
 
The OP is using a false dichotomy. There is no sharp boundary between artificial intelligence (AI) and cognitive intelligence (CI).
Is your assertion based on one of the nonexistent machines or on one of the non-working theories? Even the people working on the Brain project admit that they have no idea how to build a thinking machine.

Let's try that again without the wishful thinking. I can explain the evolutionary development of cognition from something like a roundworm to a human. There's no point talking about intelligence, since there is nothing that compares with human reasoning or even the reasoning of an adult rat. However, if you could name any AI that showed learning close to a human's, then you would have advanced the theory.

Even the cockroach can learn. Since cockroaches are not artificial, they can’t show AI. However, I don’t know if the OP would call it cognitive intelligence.
That one is pretty easy; cockroaches are not cognitive. As I recall, nematodes can also learn and they are much simpler than cockroaches.

I don't think cockroaches need a human coach. However, some human scientists have been coaching cockroaches. You may want to call them Coacharoaches.
I'm not sure what this has to do with anything though. I don't recall claiming that all learning required a coach.
 
Let's try that again without the wishful thinking. I can explain the evolutionary development of cognition from something like a roundworm to a human.
No you can't.

That one is pretty easy; cockroaches are not cognitive.
How do you know they aren't? In fact, what does that even mean?
 
No you can't.
What in the world would you base this on? Are you using a Ouija board or are you claiming telepathy?

How do you know they aren't?
Because there are vast amounts of evidence that they aren't.

In fact, what does that even mean?
It has a number of meanings. It takes a specific system in the brain or neural structure to achieve non-cognitive information processing in living organisms. This lets such an organism seek food, mating, or avoid hazards. There are limitations about what type of information is processed and there are limitations about when the information is processed. When you shift to cognitive processing the system becomes more complex, other types of information are processed, and there are changes in when the information gets processed.

Someone could describe heart function from organisms that have a muscular swelling with valves in a main artery, to an organism with a two-chamber heart, to an organism with a four-chamber heart. Information processing shows a similar increase in complexity, with specific changes in the way information is processed from simpler to more complex organisms. This isn't vague or undefined, as you are insinuating.
 
Certainly looks like it's learning to me...



https://www.technologyreview.com/s/542921/robot-toddler-learns-to-stand-by-imagining-how-to-do-it/

Technology may indeed be ahead of theory ....Wrights had no theory to go on ...just trial and error...theory came after and is still under discussion.
 
Certainly looks like it's learning to me...
I don't know how this is related to what I said.

Technology may indeed be ahead of theory
This is too vague to have much meaning. AI theory is fairly well developed, and all of the current technology is well within its bounds, so there is nothing ahead of AI theory. However, that isn't the topic. Something that could behave more like anything from an adult rat to a human is what would be called general AI. Unfortunately, there is neither technology nor theory for GAI. And there is no technology that would be outside of AI but within GCS theory.

....Wrights had no theory to go on ...just trial and error...theory came after and is still under discussion.
That isn't true. They visited Otto Lilienthal and obtained a table of lift and drag for airfoils from him. When they tested those figures, they found that they were wrong. So they built a wind tunnel and generated their own L/D tables. They had ideas about flight after watching Lilienthal. They felt that control was the most important aspect of flight, rather than power, as Langley was saying. This led them to develop wing warping for banking. I suppose their most formal theory would be the one they developed for propellers.
 
What in the world would you base this on? Are you using a Ouija board or are you claiming telepathy?
I don't have to be telepathic to say you can't do the impossible. The information just isn't there.

What you can do is use comparative neuroanatomy and a taxonomy chart to BS a post-hoc evolutionary hypothesis for everything. But that will be wrong.

Because there are vast amounts of evidence that they aren't.


It has a number of meanings. It takes a specific system in the brain or neural structure to achieve non-cognitive information processing in living organisms. This lets such an organism seek food, mating, or avoid hazards. There are limitations about what type of information is processed and there are limitations about when the information is processed. When you shift to cognitive processing the system becomes more complex, other types of information are processed, and there are changes in when the information gets processed.

Someone could describe heart function from organisms that have a muscular swelling with valves in a main artery, to an organism with a two-chamber heart, to an organism with a four-chamber heart. Information processing shows a similar increase in complexity, with specific changes in the way information is processed from simpler to more complex organisms. This isn't vague or undefined, as you are insinuating.
This is just special pleading. "Cockroaches aren't [insert term] because only humans are."
 
I don't have to be telepathic to say you can't do the impossible.
Well, the last time I checked, I didn't seem to be able to do the impossible.

The information just isn't there.
What information are you talking about?

What you can do [any fool could do] is use comparative neuroanatomy and a taxonomy chart to BS a post-hoc evolutionary hypothesis for everything.
I suppose I could make something like this and claim that it was a breakthrough in transportation. But, that wouldn't take me over two years to build nor would anyone believe it. I haven't spent all this time working on a post-hoc chart.

This is just special pleading. "Cockroaches aren't [insert term] because only humans are."
Yes, that argument would be. But that isn't my argument and never has been. If you are going to make up both sides of the argument then you can actually do that without even posting--perhaps using a hand puppet.

My arguments are nothing like this. I know where consciousness came from and it was mainly accidental in terms of evolutionary development. Secondly, there is no big gulf separating humans from other cognitive animals.
 
Well, the last time I checked, I didn't seem to be able to do the impossible.


What information are you talking about?


I suppose I could make something like this and claim that it was a breakthrough in transportation. But, that wouldn't take me over two years to build nor would anyone believe it. I haven't spent all this time working on a post-hoc chart.

Yes, that argument would be. But that isn't my argument and never has been. If you are going to make up both sides of the argument then you can actually do that without even posting--perhaps using a hand puppet.

My arguments are nothing like this. I know where consciousness came from and it was mainly accidental in terms of evolutionary development. Secondly, there is no big gulf separating humans from other cognitive animals.
Riiight, back to the seven veils dance, where you claim to have all this worked out but never actually present it, despite that being much more interesting than using the thread to gripe about AI learning, Turing machines, and cockroaches.
 
I've been working on formalizing knowledge theory. One of the issues I've run into is AI learning systems. This involves trying to distinguish between cognitive learning and AI learning. However, I haven't been able to find an AI system that can learn. Either I'm just not looking hard enough or the AI definition is very different from what I consider learning to be.

You haven’t looked hard enough. Scientists have been working for a long time with neural nets, which by definition learn in a general way. It may not be cognition, but it is learning.

You should start by working out why a neural net program is NOT cognition. Unless we understand how your concept of learning is different from what an advanced neural net program does, we have no way to understand your 'theory'.

http://uhaweb.hartford.edu/compsci/neural-networks-history.html
‘The earliest work in neural computing goes back to the 1940's when McCulloch and Pitts introduced the first neural network computing model. In the 1950's, Rosenblatt's work resulted in a two-layer network, the perceptron, which was capable of learning certain classifications by adjusting connection weights. Although the perceptron was successful in classifying certain patterns, it had a number of limitations. The perceptron was not able to solve the classic XOR (exclusive or) problem. Such limitations led to the decline of the field of neural networks. However, the perceptron had laid foundations for later work in neural computing.’
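To make the quoted history concrete, here is a minimal sketch of Rosenblatt-style perceptron training (weights adjusted by the classic error-correction rule); it learns AND from examples, but no setting of a single linear unit's weights can fit XOR, which is exactly the limitation the passage describes. The learning rate and epoch count are arbitrary choices for the sketch:

```python
# A single-layer perceptron: learns linearly separable functions like AND,
# but cannot represent XOR no matter how long it trains.

def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # Rosenblatt error-correction rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w_and, b_and = train_perceptron(AND)
print([predict(w_and, b_and, x) for x, _ in AND])   # learns AND: [0, 0, 0, 1]
w_xor, b_xor = train_perceptron(XOR)
print([predict(w_xor, b_xor, x) for x, _ in XOR])   # no linear unit can match XOR
```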

https://www.coursera.org/course/neuralnets
‘Neural networks use learning algorithms that are inspired by our understanding of how the brain learns, but they are evaluated by how well they work for practical applications such as speech recognition, object recognition, image retrieval and the ability to recommend products that a user will like. As computers become more powerful, Neural Networks are gradually taking over from simpler Machine Learning methods. They are already at the heart of a new generation of speech recognition devices and they are beginning to outperform earlier systems for recognizing objects in images. The course will explain the new learning procedures that are responsible for these advances, including effective new procedures for learning multiple layers of non-linear features, and give you the skills and understanding required to apply these procedures in many other domains.’



Here is an AI program that learned chess.
https://www.technologyreview.com/s/...ss-in-72-hours-plays-at-international-master/
‘It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.’

https://en.wikipedia.org/wiki/Deep_learning
‘Deep learning (deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures, or otherwise composed of multiple non-linear transformations.[1][2][3][4][5][6]
Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc. Some representations make it easier to learn tasks (e.g., face recognition or facial expression recognition[7]) from examples. One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.’
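As a hedged illustration of the "multiple processing layers" idea in the quote, here is a tiny two-layer sigmoid network trained by backpropagation on XOR, a task no single linear layer can represent; the layer sizes, learning rate, epoch count, and random seed are all arbitrary choices for the sketch:

```python
import math, random

# A tiny 2-4-1 sigmoid network trained by backpropagation on XOR.
# Every hyperparameter here is an illustrative choice, not a recipe.

random.seed(1)
H = 4                                   # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = random.uniform(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + bias)
         for ws, bias in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for _ in range(5000):                   # per-example gradient descent
    for x, t in data:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)  # gradient at the output unit
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # backpropagated to hidden unit j
            w2[j] -= 0.5 * dy * h[j]
            for i in range(2):
                w1[j][i] -= 0.5 * dh * x[i]
            b1[j] -= 0.5 * dh
        b2 -= 0.5 * dy
loss_after = total_loss()
print(round(loss_before, 3), round(loss_after, 3))
```

The extra layer is what lets the network build intermediate features, which is the "learning representations" point made in the quoted passage.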

So there are computer programs that in a commonly accepted sense ‘learn’. I agree that they don’t think precisely like human beings. They couldn’t pass a Turing test, for example. However, there are animals that are thought to be conscious that can’t pass a Turing test.

You didn’t make it clear whether there are any nonhuman animals that learn but do not cognate. Does a dolphin or gorilla learn by cognition? A shark? A cockroach? And what about those nematodes that learn? Are you absolutely sure, with no doubt at all, that the nematodes don’t have a very weak form of cognition?!


Examples have just been provided of AI programs that learn in a general way. You have also been provided examples of animals that learn in a general way. What is lacking is a reproducible means of testing whether a particular case of ‘learning’ is different from ‘cognating’.

Your mission, should you choose to accept it, is to quantify how these AI learning programs ARE NOT ‘cognition’.

Do you cognate this :confused:
 
Riiight, back to the seven veils dance, where you claim to have all this worked out but never actually present it, despite that being much more interesting than using the thread to gripe about AI learning, Turing machines, and cockroaches.
To me, this is obvious, but I guess it isn't to everyone.

Assertion: Nothing can be discussed that isn't proven.

Theory not proven -> Not discussed here.
Theory proven -> Not discussed here.

Do you notice that in both cases, it is not discussed here? So, basically you have an argument for forum shrinkage. Please explain how you would get around your own contradiction.
 
I don't have to be telepathic to say you can't do the impossible. The information just isn't there.

What you can do is use comparative neuroanatomy and a taxonomy chart to BS a post-hoc evolutionary hypothesis for everything. But that will be wrong.

This is just special pleading. "Cockroaches aren't [insert term] because only humans are."


…so refreshingly anti-stupid….but Barehl has that elusive quality known as ‘chutzpah’. I think it should be encouraged.

My arguments are nothing like this. I know where consciousness came from and it was mainly accidental in terms of evolutionary development.


One tiny sentence…three earth-shattering claims:

- You are claiming you know what consciousness is.
- You are claiming you know where it came from.
- You are claiming that, evolutionarily speaking, it is accidental.

As much as I admire your confidence…I expect that you may find these statements receive a somewhat skeptical reception. But then again…this is a skeptics forum!

Secondly, there is no big gulf separating humans from other cognitive animals.


…and there is this claim. Wow! This claim must be predicated on an empirical representation of ‘cognition’ as well as the capacity to explicitly and definitively adjudicate the condition in other 'creatures'!

...just curious...but I note in the OP you seem to suggest that you have not finalized your work on the formalization of knowledge theory. Would not an explicit representation of cognition require such a resolution? Meaning...no definitive knowledge theory...no empirical understanding of cognition.

If you have, in fact, achieved any one of these epistemological milestones then you are certainly deserving of respect. But this is a skeptics forum…so don’t be surprised if you are inundated with demands for evidence.

To me, this is obvious, but I guess it isn't to everyone.

Assertion: Nothing can be discussed that isn't proven.

Theory not proven -> Not discussed here.
Theory proven -> Not discussed here.

Do you notice that in both cases, it is not discussed here? So, basically you have an argument for forum shrinkage. Please explain how you would get around your own contradiction.


I don’t think anyone has any issues with discussing whatever theories you may wish to discuss (personally I find all this stuff to be quite fascinating). I think what folks have issues with is that you keep tossing in these rather exorbitant claims (and they are exorbitant…by any standards [last time I checked nobody even knew what consciousness was let alone where it came from])…and then kind of not backing them up.
 
To me, this is obvious, but I guess it isn't to everyone.

Assertion: Nothing can be discussed that isn't proven.

Theory not proven -> Not discussed here.
Theory proven -> Not discussed here.

Do you notice that in both cases, it is not discussed here? So, basically you have an argument for forum shrinkage. Please explain how you would get around your own contradiction.
And now it's delusions of grandeur. This forum will not rise or fall based solely on our contributions. If you have something to say, say it.
 
Without a scientific testable definition of what learning is, and what cognitive learning is, the discussion will go nowhere. So far the only definition we seem to have is "I know it when I see it".

Most of the things that we commonly use with the word "learn" are things that machine learning systems can learn. Walking, riding a bike, flying, playing guitar, etc.
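One minimal sketch of that kind of trial-and-error skill learning is tabular Q-learning on an invented one-dimensional "walk to the goal" task; this is a generic textbook method, not any particular system mentioned in the thread, and the task, rewards, and parameters are all made up for illustration:

```python
import random

# Tabular Q-learning on a 5-state chain: start at state 0, reward at state 4.
# The agent learns by trial and error which action to take in each state.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                    # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, sometimes explore
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # reward only at the goal
        # standard Q-learning update: lr 0.5, discount 0.9
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)   # learned greedy action for states 0..3
```

No human labels the moves; the only feedback is the reward signal, which is the usual sense in which such systems are said to learn walking, bike riding, and similar skills.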
 
To me, this is obvious, but I guess it isn't to everyone.

Assertion: Nothing can be discussed that isn't proven.

Theory not proven -> Not discussed here.
Theory proven -> Not discussed here.

Do you notice that in both cases, it is not discussed here? So, basically you have an argument for forum shrinkage. Please explain how you would get around your own contradiction.

Barehl OP #1
'I've been working on formalizing knowledge theory. One of the issues I've run into is AI learning systems. This involves trying to distinguish between cognitive learning and AI learning. However, I haven't been able to find an AI system that can learn.'

Formalize means to 'make more exact'. 'Obvious' and 'exact' are almost mutually exclusive. There are no 'obvious' concepts that are 'exact'. If you know any quantity that is obvious AND exact, please let us know.

The replies are giving you suggestions on how to make it more exact. They are pointing out the blurry boundary between cognation and learning. You are the one who proposes to make the boundary sharper and clearer.

So the topic of discussion is how to make the theory of knowledge more exact. Such a topic wouldn't be interesting unless the theory of knowledge and cognition were unclear to begin with. So the discussion has been about how the theory of knowledge is unclear.

You aren't being discouraged by the discussion. The intent is to encourage.

You said that you had NO examples of AI learning systems. Replies have presented examples of AI learning systems and of learning systems that are like AI systems.

You did not ask for an example of cognitive learning. Supposedly you already have one that is not exact. You asked specifically for an AI system that learned without human coaching. You have been provided with several examples: neural nets and invertebrate animals.

You are developing a way to distinguish between cognition and learning. You seem to think it is obvious that humans cognate. You have been given quite a few examples of AI systems that learn without direct human coaching. So tell us how to distinguish between the two.

Again, formalizing a field shouldn't depend on 'obvious'. The formalization should start with some concepts that are 'exact'.
 
