
Explain this to me.

Patrick

First, an analogy:

Suppose I take a photograph using film. I have prints made; the prints are the final repository of the image. It's there to look at any time.

Now, suppose I just look at something: light waves from the image go through space to my eye. The eye lens focusses the image on my retina. The optic nerve encodes the image into electric pulses and sends them to my brain, but to what? What is the last "thing" that is "looking at" the image? The image terminates somewhere in some visual center of the brain, I know that (analogous to the print), but "who" or what is "looking at" the "print"? See what I mean?

This is driving me up the wall.
 
Patrick said:
The image terminates somewhere in some visual center of the brain, I know that (analogous to the print), but "who" or what is "looking at" the "print"? See what I mean?

The electrical impulses are incorporated into the frenzy of electrical activity in the brain, like ripples on a windblown lake surface. They affect and modify the patterns shooting through the neurons. That neural activity is the "who" in this case -- that's what the brain does.

Jeremy
 
Jeremy said:
The electrical impulses are incorporated into the frenzy of electrical activity in the brain, like ripples on a windblown lake surface. They affect and modify the patterns shooting through the neurons. That neural activity is the "who" in this case -- that's what the brain does.

Not sure you get it -

1. The electrical impulses are analogous to light waves being transmitted through the various optical elements of a camera.

2. Then, in a camera, the light waves STOP and are encoded in the silver halide molecules in a film emulsion - this is analogous to the images I see being stored by some encoding in brain molecules able to store the image.

3. Finally, I LOOK at the film negative - what happens that is analogous to this in the brain? "Who" or "what" "looks at" the stored image?
 
So what you've demonstrated is that the camera analogy isn't one that works when it comes to looking at things, right?

I mean, analogies are often useful, but there are always things they don't apply to, or points at which they stop being good analogies and start being bad, misleading, or straightforwardly incoherent ones. This is basically true of all analogies, and is why they're useful but at the same time dangerous.

I don't see that there's much more to it than that, honestly.
 
Eleatic Stranger said:
I mean, analogies are often useful, but there are always things they don't apply to, or points at which they stop being good analogies and start being bad, misleading, or straightforwardly incoherent ones. This is basically true of all analogies, and is why they're useful but at the same time dangerous.

I don't misuse analogies. I don't know much about physiology, having majored in physics. Skip the analogy. What is "at the end"? Everything people say about sight has to do with the transmission of light waves, conversion into electrical pulses, transmission across synapses - but what's at THE END? What is "the end"? What is there to "look at" the image when it is at the end? How do I "see" the image once it finally gets to the end?
 
On the one hand, Eleatic Stranger is right -- the analogy isn't all that good. There may not be a perfect one-to-one correspondence between a photograph and the way our brains process images.

But, trying to answer your question the best I can using an imperfect analogy, I would say that the equivalent of the light waves stopping when they expose the film would be when the signals along the optic nerve reach the visual cortex (in the occipital lobe, at the back of the brain). There they stop and are processed (like the film being developed), before being incorporated into the rest of the brain's activity (like a person looking at the finished photograph).

That's the best I can do within the framework of your analogy. I'm no neurologist, but my impression is that things aren't really broken down into discrete steps to quite that extent.

Jeremy
 
Patrick said:
What is "the end"? What is there to "look at" the image when it is at the end? How do I "see" the image once it finally gets to the end?

Okay, so it looks like you're really asking the question, "What is consciousness?" from a rather oblique angle. You probably know as well as anyone else that no one has a complete answer to that question. What we do know is that there is a very tight correlation between our awareness and the activity in our brains -- only the craziest crackpots deny that.

Mainstream thought is that consciousness is the product (or by-product) of the hugely complicated interactions between neurons in the brain -- not just interconnected networks, but recursion, feedback loops, etc. Some of the more, er, speculative minds out there mumble vaguely about quantum mechanics having something to do with it. But exactly how these interactions give rise to the "feeling" of awareness we experience is still an open question.

My best idea: memory. Our brains are, at a simplified level, a machine for processing sensory stimuli and comparing them on an abstract level to other stimuli recorded earlier (i.e., our memories). But how could the brain store memories in abstract ways without some kind of agent to provide a context? I submit that consciousness is not really a state of being, but merely our brains "recording" memories in real time -- the frame at the top of the stack. Does that make any sense? Probably not. I don't have any better ideas than anyone else.

One of the better books on the subject, at least ten years ago when I read it, is Douglas Hofstadter's famous Gödel, Escher, Bach: An Eternal Golden Braid. At the very least, it'll help you understand how dizzying and complex a lot of the issues are.

Jeremy
 
I think that this is still an open research problem, so the answer probably is "nobody knows."

From what I remember, the visual cortex works in a hierarchical fashion. Some groups of neurons look for vertical lines, others for horizontal lines, circles, etc. The next level looks for more complicated patterns, and so on. So some group at some level uses the previous levels to find faces, for instance. What exactly looks at the final output? No idea.
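
To make that layered picture a bit more concrete, here is a minimal sketch in Python (using NumPy). It is not how real neurons compute -- the kernels, the toy image, and the "corner" stage are all my own illustrative assumptions -- but it shows the idea of simple oriented detectors feeding a later stage that only responds to a more complex combination of them.

```python
import numpy as np

def correlate2d(image, kernel):
    """Naive 'valid'-mode 2-D correlation, just to keep the example self-contained."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Stage 1: "simple cell"-like oriented edge detectors.
vertical_edge = np.array([[-1.0, 0.0, 1.0]] * 3)   # responds to vertical edges
horizontal_edge = vertical_edge.T                   # responds to horizontal edges

# A toy image: a bright square on a dark background.
image = np.zeros((10, 10))
image[3:7, 3:7] = 1.0

v_map = np.abs(correlate2d(image, vertical_edge))
h_map = np.abs(correlate2d(image, horizontal_edge))

# Stage 2: a "complex" unit that fires where both orientations are strong in
# the same neighborhood, i.e. at the corners of the square. Real cortex stacks
# many more stages than this before anything like a face detector appears.
corner_map = np.minimum(v_map, h_map)
peak = np.unravel_index(corner_map.argmax(), corner_map.shape)
print("strongest corner response at (row, col):", tuple(int(i) for i in peak))
```

Real visual cortex has far more stages and far messier wiring than this, and the "what looks at the final output" question is exactly the part such a sketch says nothing about.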

In any case, there are a lot of people who would love to know: neurophysiologists, and computer scientists (especially those in my field, robotics, which includes computer vision).
 
The reason that I think it's a bad analogy (and I mean this in the broadest sense possible - as in, it's just not a fruitful way of thinking about how we see things), for the record, is that it leads precisely to that sort of regress.

A point to make against it might simply be that we're asked to assume that at some point the self/mind/whatever is confronted with data gathered by the senses. In other words, the schema is supposed to look like this:

Object --> light beam --> eye --> neural firings --> brain activity in the visual cortex --> brain activity in the frontal lobe (possibly; I'm not up on the relevant neuroscience) --> a seeing of whatever object.

But the reason I think this is a bad way of thinking about it is that it's never clear exactly where those arrows stop being causal mechanistic chains and start being 'seeings' of something. In other words, why did I represent the chain going all the way to the frontal cortex before calling it a seeing?

We could say that the reason for that is that any break up till that point leads to not having seen it, and that's a decently plausible reading of the whole thing. The problem there, as you've noticed, is that this doesn't really seem to be a good enough reason. There could be some other link we're missing first, and even if not, why should we assume that the 'seeing' happens there?


For the record, I think a more plausible way of going about the whole thing is to simply differentiate between seeing something, and the mechanism involved when you see something. In other words, the chain going from the object to the frontal cortex is -- and I don't think anyone would deny this little bit -- at least a part of the mechanism of seeing something. However, that doesn't bear significantly on the epistemology involved.

In other words, when talking about 'seeing' what you should say isn't that at some point the causal chain stops and the 'seeing' happens. What you should say is that you see the object (literally you -- including your eyes), and that the mechanism of seeing involves your eyes, your visual cortex, activity in the frontal lobe, etc.

To take a different example, you wouldn't ask the following question (hopefully):
Now, suppose I just go for a run: my feet move quickly across the pavement. They are caused to do so by my muscles flexing in rhythm. The muscles are caused to do that by neurons firing in my legs, and the neurons are caused to fire by other neurons in the brain. But, where does the running happen? What is the last "thing" that is "running"? The chain of neuron firings ends in the brain, I know that (analogous to the print), but "who" or what is "running"? See what I mean?

Now, clearly that's ludicrous -- what's running is you, and that includes your feet, legs, neurons, and the whole deal. It's not something else that can eventually be tracked down the way a photograph can be.

So, why is it so intuitive to treat "looking at something" as different from "running"?

(I think it's because of that analogy, which seemed too obvious as a starting point. But it tends to lead to silliness -- we could possibly alter it enough to locate 'looking' in some part of the brain, and so forth (though, trust me, the problems that doing so runs into are somewhat... extreme, and no one has managed a satisfactory way of doing this so far), but wouldn't it simply be better to get rid of the whole thing entirely?)

[edited to change the terminology of a distinction, which looked confusing to me at any rate]
 
Patrick, may I recommend The Quest for Consciousness, by Christof Koch. Since his field of expertise is vision, he focuses on the consciousness of vision.

The output from the optic nerve goes to multiple places in the brain. Some ancient, nonconscious functions are performed by the midbrain. The conscious processing is performed first by the visual cortex and then by higher centers. It's quite complex, as you'd expect from haphazard evolution.

The cool example of how vision is performed by multiple parts of the brain is the person with certain damage to the left visual cortex. When shown an object in their right field, they say they cannot see it. When given the command "pick up the vase," they do just fine. The latter is a nonconscious action mediated by visual pathways that bypass the damaged cortex and conscious awareness.

~~ Paul
 
How am I supposed to concisely explain tens of billions of microscopic switches and their myriad interconnections to you?

I couldn't adequately make another person understand how an electromechanical pinball machine works in one sitting, and even then that would only generally explain one brand of machine -- far simpler than a brain.

For instance, a lot is understood about invertebrate visual and nervous systems, because the same kind of bug is always wired the same. They can dissect and NAME various structures and subsystems, and refer back to them. Also the limited number of cells and interconnects makes it possible to effectively model them.

For instance, a particular structure in a locust holds promise for simplifying collision detection in robotics.

Theory: Simulating locust vision
http://www.imse.cnm.es/locust/publications/conferences/KeiRod03slides.pdf

Application: Automotive safety
http://news.nationalgeographic.com/news/2004/08/0806_040806_locusts.html
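
For what it's worth, here is a rough sketch of the looming-detection idea behind those links: the locust neuron the slides describe is usually modeled as responding to how fast an approaching object's image expands on the eye. The formula, the constants, and the threshold below are illustrative assumptions of mine, not the published model.

```python
import math

def angular_size(radius, distance):
    """Angle (radians) subtended by an object of a given radius at a given distance."""
    return 2.0 * math.atan(radius / distance)

def looming_signal(theta_now, theta_prev, dt, alpha=3.0):
    """Toy looming response: angular expansion rate, damped once the object already looms large."""
    expansion_rate = (theta_now - theta_prev) / dt
    return expansion_rate * math.exp(-alpha * theta_now)

# Simulate an object of 0.2 m radius approaching head-on at 10 m/s from 20 m away.
radius, speed, dt = 0.2, 10.0, 0.01
distance = 20.0
theta_prev = angular_size(radius, distance)
t = 0.0
while distance > 0.5:
    t += dt
    distance -= speed * dt
    theta = angular_size(radius, distance)
    if looming_signal(theta, theta_prev, dt) > 1.0:   # threshold chosen arbitrarily
        print(f"collision warning at t = {t:.2f} s, distance = {distance:.2f} m")
        break
    theta_prev = theta
```

For this simulated approach it fires while the object is still roughly a metre away; alpha and the threshold trade off how early the warning comes against false alarms from things that are merely large.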

Now an insect brain is much simpler than a human brain - about a million times (and more) simpler. But this is a step. Understanding how a bug perceives is still a long way from understanding how a mammal, let alone a human, perceives. The fact that this is 'knowable' is promising.

Of course, any good Christian can tell you that a bug doesn't have a soul. One even told me that's why it's OK to kill them. I wonder what makes all those 'soulless' things operate and perceive and hear and see, if they're 'soulless'?
 
If the photographic print is locked away in a closet, who or what is looking at it?

Light-induced changes have produced a chemical landscape that will reflect light in such a way that an observer will interpret a pattern that corresponds to a pattern the observer understands. Similarly, nerve-impulse-induced changes in neurons produce a neurological landscape that is also recognized.

In the case of the brain, the "snapshot" is not preserved with a direct correspondence to every detail -- some filling in and interpretation is involved; this also occurs when a scene is recalled, hence the possibility of a recalled event being different from the original event.
 
