
Hammering out cognitive theory

I don't think cockroaches need a human coach. However, some human scientists have coaching cockroaches. You may want to call them Coacharoaches.

See? I've always said that with these budget cuts in education they'd try to sell completely insane educational strategies with nothing but buzzwords, like "Coacharoaches"! Although I suppose it is possible that the quote simply should've read "have been coaching cockroaches" instead.
 
One tiny sentence…three earth-shattering claims:

- You are claiming you know what consciousness is.
- You are claiming you know where it came from.
- You are claiming that, evolutionarily speaking, it is accidental.
There's nothing unusual about these claims. I couldn't make any progress when I analyzed cognition from the point of view of philosophy, computational theory, or neural networks. I made progress mostly when I looked at it from the point of view of evolutionary theory. What I'm saying is that a system that originated for one purpose was adapted to another. That happens quite often in evolution. My assertion is that the same system still exists in humans and is used for two different functions--but not both at the same time. That is one of the reasons why I get amused when new-agers make claims of untapped potential.

…and there is this claim. Wow! This claim must be predicated on an empirical representation of ‘cognition’ as well as the capacity to explicitly and definitively adjudicate the condition in other 'creatures'!
This system is obviously more complex in humans than in other animals but it is still the same system that probably evolved in non-schooling fish half a billion years ago. I assume you are referring mainly to the last adaptation that separates humans from chimpanzees and Neanderthals. That's a different adaptation that is usually harmful rather than helpful.

...just curious...but I note in the OP you seem to suggest that you have not finalized your work on the formalization of knowledge theory. Would not an explicit representation of cognition require such a resolution? Meaning...no definitive knowledge theory...no empirical understanding of cognition.
Not exactly. I came up with the conscious cycle last year and it seemed to fit with existing evidence. It has continued to match the evidence. However, people like Russ have been highly critical because it lacked an explanatory theory that would let you compare it to things like computational theory. That's a reasonable criticism. If you can't formalize it then how would you know that it was different from computational theory and not just some specialized subset? However, I couldn't fit it with existing theory, so I created knowledge theory. Adding knowledge theory has expanded the work into a more robust General Cognitive Systems theory and shown where the gaps are.

If you have, in fact, achieved any one of these epistemological milestones then you are certainly deserving of respect. But this is a skeptics forum…so don’t be surprised if you are inundated with demands for evidence.
I'm not sure that I would describe it as epistemology since that is a branch of philosophy and I'm working on formal theory. I might describe knowledge theory as in between information theory and the philosophical concept of truth. I have no idea what you are trying to say about respect. If you mean demonstrated success (evidence) then that is pretty much irrelevant to this forum. Or, if you mean predicting the future (confidence) based on demonstrated success then that too is irrelevant. Demands for evidence are also irrelevant. When I am finished I'll publish it. It would be silly to try to prove it here.

I don’t think anyone has any issues with discussing whatever theories you may wish to discuss (personally I find all this stuff to be quite fascinating). I think what folks have issues with is that you keep tossing in these rather exorbitant claims (and they are exorbitant…by any standards [last time I checked nobody even knew what consciousness was let alone where it came from])…and then kind of not backing them up.
All right, let's say that the claims are exorbitant. So, let's run through the possible motives that I could have:

1.) I am delusional.
This would not be that unlikely given that people with schizophrenia often develop obsessions with particular subjects and draw unwarranted conclusions based on unrelated data. For example, one schizophrenic man I talked to thought that it was significant if two words began with the same letter. As far as I can tell, I'm not delusional in this sense. For example, I managed to take the Ford temperature control unit apart on my mother's car and replace the O-rings. If you take this to a dealer, it's an $800 repair because a new unit costs $500 by itself. The O-rings cost me 90 cents apiece. I also replaced the heat control actuator on my minivan. This is a $600 repair. Instead, a new actuator cost me about $30. The old one had stripped teeth so it would only click instead of opening or closing the duct that lets more or less air pass through the heater core. I also replaced the venturi gasket on the water softener and the bypass tube on my engine that was leaking antifreeze. The bypass tube is under the intake manifold. These types of valuations concerning money and the practicality of repair are generally beyond the ability of someone with schizophrenia.

2.) I'm attention seeking.
This again would not be that unlikely. There is the famous case of Stephen Glass, who invented about half of the stories he wrote for The New Republic. People have fabricated being relatives of famous people such as Clark Rockefeller, have fabricated illnesses, fabricated histories, fabricated abilities, and have certainly fabricated technologies. Except, the profile doesn't seem to fit. If I were fabricating the idea that I could make a thinking machine then the most likely route would be to fake some kind of Turing Test where I demonstrated that an AI could seemingly answer complex questions. In other words, I should be big on demonstration and the theory should match what people already expect. So, if I wanted it to sound plausible I suppose I would claim that I had come up with a breakthrough procedure in deep learning rather than saying that I didn't think that general AI was possible. And why would I talk about knowledge theory when no one has any idea what that involves? Also, if I really wanted attention then where is my website, Facebook page, or YouTube channel promoting me?

3.) It's a case of Dunning-Kruger.
This is also not that unlikely. There's no doubt that Sam Harris believes what he said about free will. I'm certain that Susan Blackmore believes what she says about parapsychology and dualism. I'm certain that Tononi and Barrett believed what they said about Integrated Information Theory. I'm certain that Penrose and Hameroff believed what they said about orchestrated objective reduction. I'm certain that Baars believes what he said about Global Workspace Theory and I'm certain that Dennett believes what he said about Multiple Drafts. These are all examples of making an intuitive conclusion rather than working through the details. I suppose if these people were neural nets then we would say that they found a local minimum rather than a global minimum.

There are some differences though. When people make an intuitive conclusion it tends to be fairly simple, something that is easy to explain. GCS is more than I could fit in an entire journal, much less one article. Dunning-Kruger also works by information avoidance. People cling to what appears to be a simple solution rather than seeking additional information or opposing ideas. However, if you don't seek additional information then you have no idea what fits and what doesn't, so it becomes impossible to evaluate the quality of your theory based on the evidence. And, if you don't look at opposing ideas then how would you have any idea what the scope of the problem was? Minimizing scope is a good way to arrive at a local minimum. To the best of my knowledge, I'm not avoiding either new information or opposing ideas.

4.) I'm on the right track.
 
And now it's delusions of grandeur. This forum will not rise or fall based solely on our contributions. If you have something to say, say it.
You've completely skipped the question. Based on your scenario, it would not get discussed here. How would you get around that?
 
By pointing out that you are attacking a straw man. I honestly haven't the foggiest idea what question or scenario you're talking about, nor what "it" is - you appear to have asked and answered completely internally without the rest of us in the loop.

You keep saying you have these grandiose theories, which are simultaneously so robust they don't need our input on them, yet so fragile that they can't be exposed to outside scrutiny. So you dance around asking ancillary questions like "where are the AIs that learn?" knowing full well you'll never accept any answer as satisfactory, and when challenged directly to state your theories, instead change the subject with tangential horse crap like the above.

barehl said:
It's a case of Dunning-Kruger.
...
To the best of my knowledge, I'm not avoiding either new information or opposing ideas.
Yeah, that's how Dunning-Kruger works. It'll always be "to the best of your knowledge."
 
I honestly haven't the foggiest idea what question or scenario you're talking about, nor what "it" is
You have a scenario where you suggest that I have to have proof of my ideas before they are discussed here. I pointed out that by this scenario, my ideas would never be discussed here either before or after they are proven. So, using common sense, your scenario excludes discussion. I asked you to explain how you would get around this but you've ignored the question.

You keep saying you have these grandiose theories
Actually, I don't keep saying that. I talk about what ideas I have. If you think they are grandiose, that's fine but that isn't based on anything more than an assumption on your part.

which are simultaneously so robust they don't need our input on them
Wouldn't it be nice if something in that sentence bore some resemblance to reality? I've never said anything like that. I have been working on these ideas for over two years. If this were easy then I would have finished it long ago. Of course, there are those here who apparently subscribe to the jazz theory of problem solving and don't think that two years is sufficient suffering. I work on it and I make progress. This doesn't mean that I have or can explain everything related to the subject. I'll keep working on it until I can't make progress.

yet so fragile that they can't be exposed to outside scrutiny.
Now you are just being dishonest. I haven't published the theory yet. If the theory is right then I want credit for it. That has nothing to do with it being fragile. This insinuation of yours is particularly dishonest when you know as well as I do that no theory of this kind has ever stood up to scrutiny. If mine did then it would be the first.

when challenged directly to state your theories, instead change the subject
Maybe you are just not getting this point. Let me see if I can explain in a simpler fashion. I am not going to prove my theory here. I am not going to explain my ideas in detail here. Suggesting that I do this is unreasonable, and you already know that.

Yeah, that's how Dunning-Kruger works. It'll always be "to the best of your knowledge."
Actually, that is how dishonesty works. Notice that along with your accusation, you didn't include any instances where I've actually done it.
 
Without a scientific testable definition of what learning is, and what cognitive learning is, the discussion will go nowhere. So far the only definition we seem to have is "I know it when I see it".

Most of the things that we commonly use with the word "learn" are things that machine learning systems can learn. Walking, riding a bike, flying, playing guitar, etc.

During the summer the grass had to be mowed. Operating a riding mower doesn't seem to require a lot of brain power so I would think about the theory while I was doing that.

A few days ago it was warm enough for me to ride my bike. The course I measured out is 4.4 miles. I kept thinking about my theory and trying to fit the ideas with a real system. I ended up doing three laps. I thought about it some more and decided that I needed to map things out better. But, I couldn't seem to do that using either text or a spreadsheet. It seemed that a flowchart might be better. I didn't have a flowchart program handy so I downloaded Apache OpenOffice so I could use Draw.

My ideas have now collided head-on with reality. Unless you do something stupid like putting in a cloud balloon with text that says "Mysterious Process", you have to be honest with yourself. Flow charting requires both formality and organization. If I can't do the flow charts then I don't have sufficient explanation to build a real system. I'll have to see if I can work through this.
 
This involves trying to distinguish between cognitive learning and AI learning. However, I haven't been able to find an AI system that can learn.
Please define your terms, barehl.
What do you mean by "cognitive learning"?
What do you mean by "AI learning"?

For that matter you may have to give your definition of "learn" :D.
Posters have pointed out that it is easy to find AI systems that learn according to the usual definition of "gain or acquire knowledge of or skill in (something) by study, experience, or being taught".
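To make that "usual definition" concrete, here is a minimal, self-contained sketch (plain Python; the numbers and function names are invented for illustration) of a system that "learns" in exactly that dictionary sense: its error on a task shrinks as it accumulates experience, even though nothing anyone would call understanding is involved.

```python
# A toy learner: gradient descent on a one-parameter linear model.
# Its predictions improve with experience (data), which satisfies the
# dictionary definition of "learn" without implying anything cognitive.

def learn(examples, steps=200, lr=0.05):
    w = 0.0                                   # initial guess for the slope
    for _ in range(steps):
        # average gradient of squared error over the experience so far
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad                        # adjust toward lower error
    return w

# "Experience": noisy observations of y = 3x
data = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8)]
print(f"learned slope ~ {learn(data):.2f}")   # close to 3.0
```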
 
My ideas have now collided head-on with reality. Unless you do something stupid like putting in a cloud balloon with text that says "Mysterious Process", you have to be honest with yourself. Flow charting requires both formality and organization. If I can't do the flow charts then I don't have sufficient explanation to build a real system. I'll have to see if I can work through this.

I really think you'd be limiting yourself unnecessarily with flowcharts. Many complex algorithms flowchart rather poorly, unless you are just trying to show the high level. Of course, since I have no idea how cognition operates, nor what direction your theories are going in, I can't conceive of what a description of cognition would even begin to look like.

My personal thought is that while trying to pin down cognition from a top-down direction is a laudable goal, and may be possible, the bottom-up approach will beat it to the punch. When natural selection selected for creatures with cognitive abilities, it was not selecting for cognition, only the results of cognition. This indicates that cognition is not some black-and-white condition, but a wide spectrum.

Similarly, computer scientists experimenting with learning and decision systems are continuously making incremental improvements. I think in order to make their software behave in the desired ways, incrementally, more and more aspects of cognition will be required, just as in natural systems. There will not be one computer scientist who gets to claim, "hah, I have created a cognitive system".

Unfortunately for philosophers, part of what computer scientists are having to do is let genetic processes drive much of how complex learning systems are made. As these processes get more and more complex, it will be harder and harder to pin down exactly how a system works.
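As a rough illustration of what "letting genetic processes drive" the design looks like in practice, here is a hedged toy sketch (plain Python; the target and parameters are invented, not taken from any real project): candidate designs are mutated and kept purely on measured results, and nothing in the loop records why the survivor works, which is exactly the opacity being described.

```python
import random

# Toy evolutionary search: candidate "designs" (bit-strings) are mutated and
# selected purely on measured fitness. The loop never represents *why* the
# winner works.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]      # stand-in for "desired behavior"

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]               # keep the top half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print("best design:", population[0], "after", generation, "generations")
```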
 
…meaning what? That at some point it will be impossible to determine how a specific result was generated? Maybe that would be the point at which cognition becomes consciousness. Sounds kinda mystical…doubtless Kurzweil is keeping track of such things.

BTW…why do you say ‘unfortunately for philosophers’? What are the philosophical implications?
 
Or maybe fortunately. I suppose it'd sell a lot more books and pack a lot more conferences if we had artificial intelligences that we didn't fully understand but could experiment on ad nauseam.
 
Without a scientific testable definition of what learning is, and what cognitive learning is, the discussion will go nowhere. So far the only definition we seem to have is "I know it when I see it".

Most of the things that we commonly use with the word "learn" are things that machine learning systems can learn. Walking, riding a bike, flying, playing guitar, etc.

Please define your terms, barehl.
What do you mean by "cognitive learning"?
What do you mean by "AI learning"?

For that matter you may have to give your definition of "learn" :D.
Posters have pointed out that it is easy to find AI systems that learn according to the usual definition of "gain or acquire knowledge of or skill in (something) by study, experience, or being taught".
Yes, the fuzziness of terms like "learn", "belief", "sense", "know", "think", or "intelligence" is always a problem. Any gain in information can be said to be an example of learning. I would probably call this something like igLearning. One of the simplest examples I can think of would be the self-adjustment of brake shoes on drum brakes.
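To illustrate that igLearning bucket with the brake-shoe example, here is a hedged toy sketch (plain Python; the class name and every number are invented for illustration): a self-adjuster that ratchets the shoes out whenever travel exceeds a threshold "gains information" about wear and changes its state accordingly, so under the loosest definition it has learned, though nobody would call it cognitive.

```python
# Toy model of a drum-brake self-adjuster as an example of "igLearning":
# the mechanism gains information about shoe wear and changes state
# accordingly, but the result is in no sense cognitive.

class SelfAdjuster:
    def __init__(self, clearance=1.0, threshold=1.2, step=0.1):
        self.clearance = clearance        # shoe-to-drum gap (mm, invented)
        self.threshold = threshold        # ratchet engages past this travel
        self.step = step                  # one ratchet click (mm)

    def wear(self, amount):
        self.clearance += amount          # shoes wear down, gap grows

    def brake(self):
        if self.clearance > self.threshold:
            self.clearance -= self.step   # ratchet takes up the slack

adjuster = SelfAdjuster()
for _ in range(50):
    adjuster.wear(0.01)                   # gradual wear between stops
    adjuster.brake()                      # each stop may click the adjuster
print(f"clearance after 50 stops: {adjuster.clearance:.2f} mm")  # hovers near the threshold
```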

Learning a physical skill is a common usage in English such as "learn to ride a bike", "learn to juggle", "learn to type" or "learn to play the piano". But the study of people who, because of brain damage, can't form new memories is decades old. The first person studied was able to develop new physical skills such as tracing a shape in a mirror even though he couldn't remember ever doing that task before. So, here I suppose we have a distinction between learning a physical skill and memory of an event. I would probably have to denote it as something distinctive such as physLearning.

Neither of these however is what is typically meant by learning. The simplest cases are generally how we teach children. We take this for granted without realizing how sophisticated the difference actually is. I haven't yet come across an example of learning with a neural net that wasn't igLearning. Children don't learn this way. Children actually have cognitive learning. In a formal definition I would have to use knowledge theory and distinguish between information gain and knowledge gain. But, since knowledge theory is undefined here, that isn't much help. Without using knowledge theory I would end up defining it as something like the creation of an abstract category that is generally applicable. Or perhaps there is a practical distinction in the way that neural nets work.
 
So a child learning to walk would be physLearning, correct?
 
I really think you'd be limiting yourself unnecessarily with flowcharts.
That is possible; I'll have to see how it goes. I have the flowchart for a Non-Cognitive System and the one above that. It would probably take three or four more flow charts to get to a General Cognitive System.

Many complex algorithms flowchart rather poorly, unless you are just trying to show the high level. Of course, since I have no idea how cognition operates, nor what direction your theories are going in, I can't conceive of what a description of cognition would even begin to look like.
It's not an algorithm. It shows the information flows and control elements between subsystems. This should be fine as long as I can define the information types and the breakdown of each subsystem.

My personal thought is that while trying to pin down cognition from a top-down direction is a laudable goal, and may be possible, the bottom-up approach will beat it to the punch.
That seems very unlikely. The bottom up approach seems to have been used because of a lack of any other approach.

When natural selection selected for creatures with cognitive abilities, it was not selecting for cognition, only the results of cognition. This indicates that cognition is not some black-and-white condition, but a wide spectrum.
I have to agree with this. Getting smarter is a disadvantage unless it allows you to gain energy from your environment faster. In other words, the gain has to be greater than the loss in terms of weight and respiration from having more brain. It actually gets worse as you keep moving up. You run into problems with speed where being smarter makes you slower, which negates any gain in the real world (but would still be considered gain to a computational theorist). This is why I said earlier that cognition was mostly accidental; it was a way of overcoming speed limitations which had no direct solution. There are additional problems like a loss of behavioral correspondence. And there are others. Homo heidelbergensis seems about as far as you can go without running into an identity paradox. You'll note that of the three lineages (Neanderthal, Denisova, and Sapiens), only Sapiens got past this problem. And then you have the fact that humans are civine whereas Neanderthals were not. That also matches since civinity is typically detrimental. These are the types of issues that people gloss over when they start talking about singularity.

Similarly, computer scientists experimenting with learning and decision systems are continuously making incremental improvements. I think in order to make their software behave in the desired ways, incrementally, more and more aspects of cognition will be required, just as in natural systems.
The facts kind of argue against this. For example, the brain project seems to be following Kurzweil's erroneous back-of-the-envelope estimates.

There will not be one computer scientist who gets to claim, "hah, I have created a cognitive system".
If we were talking about Henry Markram I would probably agree with you. But, I'm sure you are including me in that sentence. You could be right. However, I don't remember seeing anyone else talking specifics.

The general problems that would concern building such a system have to do with data size, bandwidth issues, and information flow organization. You need 5 bits to match synaptic levels but this isn't evenly divisible, so you would round up to 8 bits. I'm pretty sure that you can get by with 32 bits of selection; but, if you did have to go higher, you would have to round up to 64 bits.

Memory bandwidth using DDR4-2400 would require 50,000 channels. These are 8 bytes wide, which doesn't really help for 8 bits but probably means that 64 bits for selection wouldn't be slower than 32. If you can do three instructions per clock cycle then the CPU logic should be able to keep up with one channel of memory. That's about 3 GB of memory so volume isn't a problem. However, the 32 bits of selection are indirect, which means you can't use burst transfer. So, with a CL of, say, 18 and two transfers per clock, our bandwidth is 1/36th as fast. This increases our channel demand from 50,000 to 1.8 million. But surely that is impossible, since it would mean essentially no increase in memory speed since DDR first came out nearly two decades ago. It would also imply that one memory channel could support no more than 100 MHz of processor speed.

Such are the problems with trying to build a real system--and why none of the existing projects seems to have any hope of a solution. There are ways to get around these problems but not by throwing processor power at them--which is the naïve solution that everyone seems to be clinging to.
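For anyone who wants to check that arithmetic, here it is spelled out (plain Python; the 50,000-channel baseline and the CL = 18 random-access penalty are taken from the post as given, not derived independently):

```python
# DDR4-2400: 2400 MT/s on an 8-byte-wide channel.
peak_per_channel = 2400e6 * 8              # ~19.2 GB/s streaming per channel

baseline_channels = 50_000                 # figure quoted above for burst-friendly access

# Indirect (random) access can't use burst transfer: each read waits out the
# CAS latency, so with CL = 18 clocks and 2 transfers per clock the post
# counts 36 transfer slots spent per transfer actually delivered.
cl, transfers_per_clock = 18, 2
penalty = cl * transfers_per_clock         # = 36

effective_per_channel = peak_per_channel / penalty
channels_needed = baseline_channels * penalty

print(f"peak per channel:      {peak_per_channel / 1e9:.1f} GB/s")
print(f"effective per channel: {effective_per_channel / 1e9:.2f} GB/s")
print(f"channels needed:       {channels_needed:,}")   # 1,800,000
```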

Unfortunately for philosophers, part of what computer scientists are having to do is let genetic processes drive much of how complex learning systems are made. As these processes get more and more complex, it will be harder and harder to pin down exactly how a system works.
I disagree. A lack of understanding will almost certainly lead to failure. I'm not quite sure how this relates to philosophers though.
 
So a child learning to walk would be physLearning, correct?
It depends on what you mean. The skill of moving your feet and legs to transport your body from place to place without falling over is in the cerebellum which is the phys part. The concept of walking is in the cerebrum and this is not phys.
 
Or maybe fortunately. I suppose it'd sell a lot more books and pack a lot more conferences if we had artificial intelligences that we didn't fully understand but could experiment on ad nauseam.
I guess I'm ruining the party then because I have no intention of making a cognitive system without knowing how it works.
 
You have a scenario where you suggest that I have to have proof of my ideas before they are discussed here.
Why yes, that would be a foolish thing to say, wouldn't it?

Good thing I've never said that.

What I've been saying, and I'll reiterate here, is that if you want to discuss an idea, discuss the idea. What you don't do is say "I have an idea," and then only talk about tangential aspects and conclusions of the idea without ever detailing what the idea is.

Other people who do this sort of thing are usually cranks, so it's a bit of a warning sign.

If the theory is right then I want credit for it.

"They'll steal my idea" is another warning sign. It does happen from time to time, but nowhere near as often as it's invoked. If you want to ensure you get credit for your ideas, you should be telling its details to as many people as possible so everyone knows it's yours when they see it elsewhere.

Actually, that is how dishonesty works. Notice that along with your accusation, you didn't include any instances where I've actually done it.
Okay, here's one: go back to your last thread and read every single sentence W.D. Clinger said to you. Judging from the quality of his replies he's forgotten more than you currently know. If you think he's wrong, you're wrong, and ignorant, and the mental state keeping you from seeing that for yourself is entirely Dunning-Kruger.
 
That is possible; I'll have to see how it goes. I have the flowchart for a Non-Cognitive System and the one above that. It would probably take three or four more flow charts to get to a General Cognitive System.
I have to agree with this. Getting smarter is a disadvantage unless it allows you to gain energy from your environment faster. In other words, the gain has to be greater than the loss in terms of weight and respiration from having more brain. It actually gets worse as you keep moving up. You run into problems with speed where being smarter makes you slower, which negates any gain in the real world (but would still be considered gain to a computational theorist). This is why I said earlier that cognition was mostly accidental; it was a way of overcoming speed limitations which had no direct solution. There are additional problems like a loss of behavioral correspondence. And there are others. Homo heidelbergensis seems about as far as you can go without running into an identity paradox. You'll note that of the three lineages (Neanderthal, Denisova, and Sapiens), only Sapiens got past this problem. And then you have the fact that humans are civine whereas Neanderthals were not. That also matches since civinity is typically detrimental. These are the types of issues that people gloss over when they start talking about singularity.

You would do better if you wouldn't gloss over what other people have said concerning evolution.

I suggest that you could incorporate the research of real biologists rather than completely ‘intuit’ your way to fame and fortune. Some of the things that you have said remind me of things said in the book, ‘Ontogeny and Phylogeny’ by Stephen Gould.

Perhaps you are saying that cognition is one of many K-strategies as opposed to an ‘r-strategy’. K-strategies are trends in evolution in which an organism’s resources are used more for prolonging the period of reproduction than increasing the rate of reproduction.

Maybe you should tell us how cognition is different from any other adaptation. The parts of your model that you have told us so far don’t distinguish between different K-strategies. So far as I can tell, the evolution of ‘cognition’ is no different from the evolution of ‘hypertrophy’, which is excessive growth of the adult.

You seem to be restating the concept of K-strategy rather than formalizing cognition theory. As you develop your ‘formal cognition theory’, maybe you should focus a bit on cognition.

http://www.bio.miami.edu/tom/courses/bil160/bil160goods/16_rKselection.html
‘Organisms that live in stable environments tend to make few, "expensive" offspring. Organisms that live in unstable environments tend to make many, "cheap" offspring.’

Cognition is expensive, not cheap. Size is expensive, not cheap. Armor is expensive, not cheap. Social structure is expensive, not cheap. These are all K-strategies. Unawareness is cheap, not expensive. Small size is cheap, not expensive. Soft skin is cheap not expensive. These are all r-strategies.

https://en.wikipedia.org/wiki/R/K_selection_theory
‘In ecology, r/K selection theory relates to the selection of combinations of traits in an organism that trade off between quantity and quality of offspring. The focus upon either increased quantity of offspring at the expense of individual parental investment, r-strategists, or reduced quantity of offspring with a corresponding increased parental investment, K-strategists, varies widely, seemingly to promote success in particular environments.
The terminology of r/K-selection was coined by the ecologists Robert MacArthur and E. O. Wilson[1] based on their work on island biogeography,[2] although the concept of the evolution of life history strategies has a longer history.[3]
The theory was popular in the 1970s and 1980s, when it was used as a heuristic device, but lost importance in the early 1990s, when it was criticized by several empirical studies.[4][5] A life-history paradigm has replaced the r/K selection paradigm but continues to incorporate many of its important themes.[6]’

There are a lot more books on this topic now. However, the following book started me on the topic of evodevo. I consider it my major hobby right now! :)


http://www.amazon.com/Ontogeny-Phyl...p/0674639413/ref=cm_cr_pr_product_top?ie=UTF8
"In this, the first major book on the subject in fifty years, Stephen Gould documents the history of the idea of recapitulation from its first appearance among the pre-Socratics to its fall in the early twentieth century.’
 
