
Does your subconscious solve problems?

For some notion to gain the status of a scientific theory there must be some observation or test that has the potential of demonstrating it is false. If all such observations or tests fail to falsify that notion, it maintains its status as a valid scientific theory. If there is no conceivable test or observation that could possibly demonstrate that some notion is false, it cannot be considered a candidate to become a scientific theory. Unless someone can demonstrate otherwise, "subconscious" seems to be such a notion -- one that cannot be falsified. Consequently, it is mere woo.

Examples:

Scientific theory: Einstein's theory of general relativity. It could be falsified tomorrow if someone were to measure something moving in excess of c -- the speed of light.

Woo: There is a parallel universe to the one we live in that is not detectable by any possible experiment or observation. By its very description it cannot be falsified.

More woo: There is a "subconscious."
 
For some notion to gain the status of a scientific theory there must be some observation or test that has the potential of demonstrating it is false.
Neither of you seems to be grasping the question.

Forget unconscious. Forget subconscious.

Is the notion of a cup woo? If so, tell me why I should discard the notion of a cup.

If not, tell me how to falsify the notion of a cup.
 
Neither of you seems to be grasping the question.

Forget unconscious. Forget subconscious.

Is the notion of a cup woo? If so, tell me why I should discard the notion of a cup.

If not, tell me how to falsify the notion of a cup.

Actually, it's quite simple. Just show us that all the objects we call cups are not cups -- i.e., demonstrate that they are really trees or earthworms or whatever. Since no one has yet done that, we are justified in believing cups exist.
 
Actually, it's quite simple. Just show us that all the objects we call cups are not cups -- i.e., demonstrate that they are really trees or earthworms or whatever. Since no one has yet done that, we are justified in believing cups exist.
I still don't understand. Can you propose a test we can perform on an object I think is a cup that will determine if it is in fact a cup?

And also, why would showing that all of the things I believe are cups are not in fact cups falsify the concept of a cup? Couldn't it just mean that the things I believe are cups simply aren't cups? Wouldn't this instead simply be falsifying the claim that the things I believe are cups are, in fact, cups?

For example, what if we played with the notion of "indivisible unit of matter", and there were things I thought were indivisible units of matter (let's call those "atoms")? Now suppose it were demonstrated that everything I thought was an indivisible unit of matter was not, in fact, an indivisible unit of matter.

Did I falsify the notion of "indivisible unit of matter"?
 
I still don't understand. Can you propose a test we can perform on an object I think is a cup that will determine if it is in fact a cup?

And also, why would showing that all of the things I believe are cups are not in fact cups falsify the concept of a cup? Couldn't it just mean that the things I believe are cups simply aren't cups? Wouldn't this instead simply be falsifying the claim that the things I believe are cups are, in fact, cups?

For example, what if we played with the notion of "indivisible unit of matter", and there were things I thought were indivisible units of matter (let's call those "atoms")? Now suppose it were demonstrated that everything I thought was an indivisible unit of matter was not, in fact, an indivisible unit of matter.

Did I falsify the notion of "indivisible unit of matter"?


I suggest you read some sources about Popper's philosophy of science.
 
I suggest you read some sources about Popper's philosophy of science.
Everything I have come across so far suggests, consistent with my understanding before getting into this, that falsifiability applies to statements.

You're going to have to recommend more specific sources.
 
Putting the discussion of the existence of a subconscious to the side (I don't really care what the process is called), I think I'm not the only programmer ever to have dreamt up a fix for a tricky bit of code.

I very often have quite clear dreams about coding issues I'm working on, and more than once I have had dreams where I'm in front of my workstation with the piece of code I gave up on before going home, and suddenly see the missing semicolon, or realise I need to use a different function or move the variables around. Wake up. Go to work. Apply the fix, and it works. I have also had lots of dreams where the fix turns out to be wrong -- but it was plausible, indicating my brain was indeed working on the problem while I slept, even though it came up with the wrong answer.

The fresh perspective thing also rings true to me. Very often, when the code just won't cooperate, I find it's better to go home even with an approaching deadline, because in my current state of mind I simply don't have it in me to solve the problem. After doing something enjoyable, cooking dinner and getting a good night's sleep, I get in, pull up the piece of code and then see the issue or find a solution. I have actually stopped pulling all-nighters and try to manage client/manager expectations instead, as I found I performed just as well this way and was much less frustrated.

My thing, if I may call it that, has always been sorting out messes. That's what I enjoy the most. When I was a kid I loved to untangle the Christmas lights -- a trait my dad appreciated. So while I enjoy writing code from scratch, the most fun I have is when someone says "Find out why this code fails" or "We get unexpected results -- find out why."

That is exactly the kind of job where you need to do other stuff while you "background process" the problem. I have the same approach to malfunctioning electronics in my house and other things. I have very rarely solved a problem while sitting in front of it or thinking intently about it. Usually, the solution pops up while I'm doing something else entirely.

Which is why I don't mind that House does that all the freakin' time. The "Did you say 'Bunny'..?" thing? Totally buy it.
 
Not mysterious -- I would bet good money the network topology of memory (and most of the brain in general) is similar to this:

http://en.wikipedia.org/wiki/Hopfield_network

Note that the behavior of the Hopfield net (and similar associative memory networks) is *exactly* what you get with human memory.
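To make that concrete, here is a minimal sketch of the standard Hopfield construction -- my own illustration in Python/NumPy, not code from the Wikipedia article: patterns are stored in the weight matrix with a Hebbian rule, and each stored pattern ends up as a stable state of the network.

[CODE]
# Minimal Hopfield-style associative memory (illustrative sketch, Python + NumPy).
import numpy as np

def train_hopfield(patterns):
    """Store +/-1 patterns in a weight matrix via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)      # units that fire together get a positive weight
    np.fill_diagonal(w, 0)       # no self-connections
    return w / len(patterns)

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 64))   # three 64-unit "memories"
weights = train_hopfield(memories)

# Each stored pattern should be a fixed point of the update rule:
for m in memories:
    recalled = np.sign(weights @ m)
    recalled[recalled == 0] = 1
    print("stable:", bool(np.all(recalled == m)))
[/CODE]

The "memories" live in the weights as attractor states rather than in any one place, which is the associative behavior being described; whether cortex actually works this way is, of course, the open question.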

I haven't read the link yet, but I will.
Meanwhile, what I find mysterious is the initial command. We (whatever that is) command the brain to remember something hidden deep in the files. It's that initial 'command' that I find baffling. It's not like hitting a toggle switch.
 
I haven't read the link yet, but I will.
Meanwhile, what I find mysterious is the initial command. We (whatever that is) command the brain to remember something hidden deep in the files. It's that initial 'command' that I find baffling. It's not like hitting a toggle switch.
I'm not so sure it's not like hitting a toggle switch. Surely initiating intentional actions and triggering the brain to recall something buried in its files are involved when I hit a toggle switch.
 
Everything I have come across so far suggests, consistent with my understanding before getting into this, that falsifiability applies to statements.

You're going to have to recommend more specific sources.

Start here: Popper
There is an extensive bibliography and a list of further links included. In any case, this is a derail; if you do want to discuss this further, start a new thread.
 
I haven't read the link yet, but I will.
Meanwhile, what I find mysterious is the initial command. We (whatever that is) command the brain to remember something hidden deep in the files. It's that initial 'command' that I find baffling. It's not like hitting a toggle switch.

The link explains it all.

You simply provide the network with an initial state near the state you want to recall, and it naturally converges on the closest "remembered" state.

So the "command" is really more of a series of "hints."
 
Nicely hypothesized. Do you have a background in NNs of the computer or brain varieties? Because you certainly sound like you do. :)

I never considered a "coincidence" hypothesis to this. Neat.

I do somewhat. Unfortunately the A.I. used in video games these days, even of the most complex variety, is limited to hierarchical finite state machines. So I don't have a chance to work on ANNs professionally, but it is a hobby of mine (especially how certain network topologies might be related to our own BNN organization).

Someday I would like to start up my own studio that will specialize in ANN-controlled A.I.; we will see how that goes when the time comes.
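For anyone who hasn't run into the term, a hierarchical finite state machine is roughly this -- a toy sketch in Python with made-up states and events, not code from any real engine: each state can own a nested machine of sub-states, and events are offered to the innermost active machine first.

[CODE]
# Toy hierarchical finite state machine (hypothetical example, not from a real engine).
class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial              # current state name
        self.transitions = transitions    # {state: {event: next_state}}
        self.children = {}                # optional nested machine per state

    def add_child(self, state, machine):
        self.children[state] = machine

    def handle(self, event):
        # Offer the event to the active sub-machine first...
        child = self.children.get(self.state)
        if child and child.handle(event):
            return True
        # ...otherwise handle it at this level.
        nxt = self.transitions.get(self.state, {}).get(event)
        if nxt is not None:
            self.state = nxt
            return True
        return False

# Top level: an NPC is either patrolling or in combat; combat has its own sub-states.
combat = StateMachine("approach", {"approach": {"in_range": "attack"},
                                   "attack": {"low_health": "retreat"}})
npc = StateMachine("patrol", {"patrol": {"see_enemy": "combat"},
                              "combat": {"enemy_dead": "patrol"}})
npc.add_child("combat", combat)

for e in ["see_enemy", "in_range", "low_health", "enemy_dead"]:
    npc.handle(e)
    print(e, "->", npc.state, "/", combat.state)
[/CODE]

Real game AI layers plenty more on top (and typically resets a sub-machine when its parent state is exited), but the nesting is the "hierarchical" part, as opposed to the learned, distributed behavior you would get from an ANN.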
 
The link explains it all.

You simply provide the network with an initial state near the state you want to recall, and it naturally converges on the closest "remembered" state.

So the "command" is really more of a series of "hints."

Will do. Thanks.
 
Start here: Popper
There is an extensive bibliography and a list of further links included.
How will I know when I find it?
In any case, this is a derail; if you do want to discuss this further, start a new thread.
It's not too much of a derail. What is at stake is precisely what the implication of Jeff Correy's claim is supposed to be when, in post #34, he says that unconsciousness is a non-falsifiable concept. The question is simple -- is Jeff intending to refer to some term with statements in it, such as "the Freudian theory of unconsciousness is unfalsifiable", in post #34? Or is there such a thing as a falsifiable non-statement?

I suspect semantic sloppiness is afoot, because Correy gave an example of a claim being unfalsifiable to defend the notion that a non-statement term is unfalsifiable, and you outright converted a non-statement term into a claim to defend the same notion.

So I'm skeptical of the suggestion that "go read more Popper" is the solution, especially since you're simply sending me off to chase the references in Wikipedia.

What I care more about, and what I think is and should be more relevant to this thread than whether or not the Freudian theory of the unconscious is falsifiable, is whether or not the concept of an unconscious mind itself is a useful one.

"Subconscious" I can do without.
 
If you would like to explore whether the concept of a "subconscious" or "unconscious" is falsifiable, then do so (you will have to define "subconscious" and/or "unconscious" first). If you want to pursue the definition and consequences of falsifiability as a scientific principle (as developed by Popper), I suggest you start a new thread.
 
I just wanted to bring out a few potential talking points to get the thread back on its rails.

Incubation is the term used for what we are dealing with.


The preceding article states (p. 62):

"Perhaps one of the greatest obstacles to research on incubation effects is an adherence to the common assumption that incubation must be the result of unconscious problem solving." (which is basically the motivation behind my posting of the OP)

The article then goes on to state that fixation on an erroneous detail may make the problem intractable, but that given enough incubation time this fixation dissolves, leaving the person to solve the problem with an improved mindset. The issue of fixation is connected to functional fixedness, where one has an inability to think of "unusual uses for familiar objects."

A BBC Four programme ("Battle of the Brains") used the idea of functional fixedness (although I don't think they called it that) when trying to come up with a variety of tests to measure the intelligence of different people from different life situations and careers. One measure they used was just this: how many uses you could find for a random object.
 
If you would like to explore whether the concept of a "subconscious" or "unconscious" is falsifiable, then do so (you will have to define "subconscious" and/or "unconscious" first).
You're missing the point. Suppose I wanted to defend the opposing viewpoint--that "unconscious mind" is falsifiable. What is my burden? To explain a process in a human mind that a human isn't aware of? Why couldn't the claim apply to dog minds or robot minds? "Unconscious" says nothing, so it makes no sense to me to say that "unconscious is falsifiable".

But I'll try to table that for now because we're just running in circles; apparently you're imagining that I'm acknowledging that "unconsciousness is not falsifiable" has a meaning and am disagreeing with it, despite my begging you to supply a meaning.

You addressed the "cup falsifiability" issue by first converting the incomplete thought "cup" into a claim, such as "what I think is a cup is actually a cup". So since I have no idea what it means to say "unconsciousness is falsifiable" even hypothetically, let me follow this pattern. "What I think is an unconscious mind is really an unconscious mind".

The wiki definition of "unconscious mind" sounds good enough -- a process of the mind that a person is not aware of. For something to be called "mind", we can focus on a particular area that interests me -- intent. Intentional actions have certain properties that we can test; focusing on the example of drinking from a cup, if this action were intentional, it would involve: (1) perception of a cup, (2) setting a goal involving the perceptual object -- drinking from a cup (which itself involves such things as planning, which suggests perceiving or recalling potential uses of a cup and ways of using it), and (3) carrying out that goal.

Anything that is a goal-seeking behavior involving processing of perceptual capabilities is worthy of being called a process of the mind. Something that a person self-reports not being aware of I can meaningfully call unconscious. Were I to find a class of apparent goal-seeking behavior that a person self-reports they are not aware of, I can apply a test to that class of behaviors to determine if it is indeed goal-seeking. For example, I can influence the perception of the involved object during the action (say, making a small movement of the cup) and observe whether the resulting behaviors adjust so that the goal is still met (the arm movement compensates and still picks up the cup). If such behaviors do not adjust to meet the end goal, then I have a problem establishing that they are indeed intentional, because I have a problem establishing that they are goal-based as I originally suspected.

With that in mind, I have described a "cup" and a way of verifying that the "cup" is really a cup, but my reservations here involve an entirely different kind of burden than falsifiability. The reservations involve whether or not the term "unconscious mind" plays a useful role in the description. For example, this is simply one particular description... what other kinds of descriptions can be given using the term? Would I be better off simply calling this "unconscious intent"? Or even better off simply calling it intent that we happen not to be aware of?

(ETA: It's actually even slightly worse than this, because there may be a use for the term "unconscious mind" -- a refinement of the concept -- that makes sense yet still shouldn't apply in this scenario; for example, were I to actually find a partition of behaviors, where one part would qualify as mind in much the way I qualified intentional behaviors above, but which a subject could not attend to. Thus it's not so much that we're trying to falsify a theory as that we're trying to figure out whether there exists a "fit" for the term "unconscious mind" into an existing system.)
 
yy2bggggs: I agree with Perpetual Student. You need to start a new thread on this (interesting) discussion, but unless you tie it into "problem solving", what you are doing is derailing the thread.
 
More woo: There is a "subconscious."
The use of the terms subconscious and unconscious is a convenience. It is not a thing; it is a condition. Certainly no one can seriously argue that there is nothing going on behind the scenes of our normal waking consciousness. Whatever that stuff is that's going on is referred to as the subconscious. Likewise with terms like ego and id. Neither of these is really a thing or entity; they are descriptions.
 
yy2bggggs: I agree with Perpetual Student. You need to start a new thread on this (interesting) discussion, but unless you tie it into "problem solving", what you are doing is derailing the thread.

I disagree. If there is no such scientifically valid entity as the "unconscious", then the answer to the OP is "no".
 
