
Robot consciousness

Piggy said:
If you propose a robot that has a computer brain that produces consciousness, then it happens in the robot.

Have you ever worked out the logic for a program, then compiled it and run it to see if it works? Why doesn't it "work" when you just calculate it? Simple -- because you're not running it on the hardware.
It doesn't work while I design it, but it does work if I hand-simulate with pencil and paper.

Let's say you traced the exact sequence of neuron firings in a brain for a given conscious event. You would not expect that diagramming that sequence of firings would produce any such conscious event.
I think you were taking my phrase "hand-simulate it" to mean perhaps reviewing the code, not actually simulating the execution with pencil and paper, step by step.

And that's a big ol' pile of assumptions you got there.
I was simply trying to enumerate the requirements that would make processing speed irrelevant. And even processor type, for that matter. With enough constraints, it shouldn't matter how fast the processor is, or even whether the processor is a hand-simulation. But I doubt the brain meets those requirements.
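
To make that concrete, here is a minimal, purely illustrative sketch (the toy state machine and function names are my own assumptions, not anything specified above). For a closed, deterministic computation with no real-time inputs, the "processor" can be the host CPU or an arbitrarily slow step-by-step simulation, and the final result is the same:

```python
# Illustrative only: a toy deterministic "program" run two ways.
# For a closed, deterministic computation with no real-time inputs,
# the processor and its speed don't affect the final result.
import time

def step(state):
    """One transition of a toy state machine: (counter, accumulator)."""
    counter, acc = state
    return (counter + 1, acc + counter)

def run_native(state, n):
    """Run n steps as fast as the host will go."""
    for _ in range(n):
        state = step(state)
    return state

def run_hand_simulation(state, n, delay=0.5):
    """Apply the same transitions one at a time, arbitrarily slowly,
    printing each intermediate state (a stand-in for pencil and paper)."""
    for i in range(n):
        state = step(state)
        print(f"step {i + 1}: state = {state}")
        time.sleep(delay)  # the "processor" here is glacially slow
    return state

if __name__ == "__main__":
    assert run_native((0, 0), 10) == run_hand_simulation((0, 0), 10, delay=0.0)
    print("Same final state either way.")
```

The hedge is the word "closed": the moment the computation has to interact with the world in real time (the brain-like case I doubt), speed stops being irrelevant.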

~~ Paul
 
sol invictus said:
As in, the human questioner gets a written answer some time after asking? OK, then the answers are yes, yes, and yes.

See what I mean?
I don't think it's that simple. It depends on whether the robot brain meets my list of requirements for it to be independent of processor, processing speed, etc.

I think we're pretty much all in argumentative agreement here. :D

~~ Paul
 
shuttIt said:
I agree that the only practical test seems to be a Turing-type one, but it seems to me one is then in a position of defining consciousness to fit the only available test.
I don't know how else we can do it.

~~ Paul
 
I don't know how else we can do it.

~~ Paul
I agree, there is probably no other way. You seem to have introduced an assumption that all that exists is testable. It seems to me that the subjective experience of consciousness, as opposed to a behaviorist description of consciousness, is what I mean by consciousness. If we are to restrict ourselves to that which is testable, then fine... it's a pragmatic solution... I still wouldn't step into the teleporter.
 
Does consciousness also relate to autonomy? If you can turn down the clock speed, does the robot actually have consciousness? Surely one of the things that makes us conscious is autonomy, which you deny the robot by keeping its clock under your control.

Steve
I'm curious what consciousness and autonomy have to do with one another. Please define this autonomy.
 
shuttIt said:
I agree, there is probably no other way. You seem to have introduced an assumption that all that exists is testable. It seems to me that the subjective experience of consciousness, as opposed to a behaviorist description of consciousness, is what I mean by consciousness. If we are to restrict ourselves to that which is testable, then fine... it's a pragmatic solution... I still wouldn't step into the teleporter.
Yes, I know that we really want to address that sneaky subjective experience of what it's like to be conscious. But how? The only approach I can imagine is for neuroscience to get to the point where we can explain and control every aspect of subjective experience. Then we will understand it objectively even if we still can't witness another person's inner experience.

If consciousness is some immaterial thingie inhabiting brains, we could still possibly build a robot with the appropriate hooks so that an immaterial consciousness would invade its brain. These questions would still be relevant.

Then the robot could step into the teleporter on your behalf.

~~ Paul
 
Yes, I know that we really want to address that sneaky subjective experience of what it's like to be conscious. But how? The only approach I can imagine is for neuroscience to get to the point where we can explain and control every aspect of subjective experience. Then we will understand it objectively even if we still can't witness another person's inner experience.

If consciousness is some immaterial thingie inhabiting brains, we could still possibly build a robot with the appropriate hooks so that an immaterial consciousness would invade its brain. These questions would still be relevant.

Then the robot could step into the teleporter on your behalf.

~~ Paul
You're still assuming that there is a test that you can perform to tell whether I am subjective-conscious, or not. If your neuroscience is able to turn my subjective-consciousness off, will I answer appropriately to the question "are you subjectively conscious?" I'm not sure. Perhaps I would. It's making my brain hurt trying to figure it out.
 
shuttIt said:
You're still assuming that there is a test that you can perform to tell whether I am subjective-conscious, or not. If your neuroscience is able to turn my subjective-consciousness off, will I answer appropriately to the question "are you subjectively conscious?" I'm not sure. Perhaps I would. It's making my brain hurt trying to figure it out.
If we could turn off individual subsystems, then I think you would experience and demonstrate things like blindsight or prosopagnosia. Then we will have demonstrated the piecemeal nature of consciousness in spite of its apparent wholeness.
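
For what it's worth, here is a toy sketch of that "subsystems" picture (the module names and behaviour are my own invention, not a model of any real neuroscience): disabling the "awareness" module while leaving "localization" intact gives something loosely analogous to blindsight, where the system still acts on a stimulus it denies seeing.

```python
# Toy sketch only, not a model of real neuroscience: consciousness as
# separable subsystems. Disabling the "awareness" module while leaving
# "localization" intact gives behaviour loosely analogous to blindsight.

class ToyAgent:
    def __init__(self, awareness_enabled=True):
        self.awareness_enabled = awareness_enabled

    def localize(self, stimulus_position):
        # Action-guiding pathway: works whether or not the report pathway does.
        return stimulus_position

    def report(self, stimulus_position):
        # Report pathway: the part we would call "seeing".
        if self.awareness_enabled:
            return f"I see something at {stimulus_position}"
        return "I don't see anything"

for label, agent in [("intact", ToyAgent(True)), ("lesioned", ToyAgent(False))]:
    pos = (3, 7)
    print(f"{label}: report = {agent.report(pos)!r}, points to = {agent.localize(pos)}")
```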

~~ Paul
 
I was simply trying to enumerate the requirements that would make processing speed irrelevant. And even processor type, for that matter. With enough constraints, it shouldn't matter how fast the processor is, or even whether the processor is a hand-simulation. But I doubt the brain meets those requirements.

If the only functioning example we can be sure about does not meet those requirements, then why are they relevant?
 
I don't know how else we can do it.

I've outlined a process that actually has a chance of working.

Your argument seems to be like that of the drunk who drops his keys into the weeds, but goes looking for them under the streetlamp because the light is better.
 
You're still assuming that there is a test that you can perform to tell whether I am subjective-conscious, or not.

Why is such a test necessary?

We know the brain produces consciousness, so if you have a functional brain and are able to hold a conversation at all, that's really all we need.

Like the man says, don't go looking for ducks to feed.
 
If we could turn off individual subsystems, then I think you would experience and demonstrate things like blindsight or prosopagnosia. Then we will have demonstrated the piecemeal nature of consciousness in spite of its apparent wholeness.

Some of that work has already been done, like the case of Marvin (emotional blindness) I posted in a similar thread. I'll see if I can repost that link.
 
If we could turn off individual subsystems, then I think you would experience and demonstrate things like blindsight or prosopagnosia. Then we will have demonstrated the piecemeal nature of consciousness in spite of its apparent wholeness.

~~ Paul
We're talking about different stuff. Of course you can turn off individual subsystems, or at least I'm happy enough assuming it. If you turn off the bits related to speech I won't be able to speak and it'll probably play havoc with my ability to think. How is this consciousness...? It's information processing. I can imagine that system working in the absence of me having a subjective experience of consciousness just as easily as with.
 
Why is such a test necessary?
So that when someone does an experiment that stops me, or you, being subjectively conscious, they can put a tick in the right box.

We know the brain produces consciousness, so if you have a functional brain and are able to hold a conversation at all, that's really all we need.
I'd be being pedantic not to agree with you.

Like the man says, don't go looking for ducks to feed.
I don't think I am. It just seems to me that there are aspects of consciousness that aren't obviously accessible to scientific inquiry. One solution is to assume there is a one-to-one mapping between the subjective experience and the action of the brain, or parts thereof... fine. I think that's at least as good an explanation as any; I just don't see how one would go about testing this.

What would be missing from the pencil and paper consciousness, if not the subjective experience?
 
I think clock speed is a big, fat red herring. If the algo works, it works. You might just not perceive its consciousness because your timescale is different.

That's my take, anyway.
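
A small illustrative sketch of that timescale point (toy code, my own names, not anyone's actual proposal): slowing the clock changes when results show up in wall-clock time, but not the internal sequence of events the computation itself defines.

```python
# Illustrative sketch (my own toy example): slowing the clock changes when
# results appear in wall-clock time, not the internal sequence of events
# the computation itself defines.
import time

def simulate(ticks, seconds_per_tick):
    internal_log = []      # events as the computation orders them
    wall_clock_log = []    # the same events stamped by an outside observer
    start = time.monotonic()
    for t in range(ticks):
        internal_log.append(f"tick {t}: thought {t}")
        wall_clock_log.append(round(time.monotonic() - start, 2))
        time.sleep(seconds_per_tick)
    return internal_log, wall_clock_log

fast_events, fast_times = simulate(5, 0.0)
slow_events, slow_times = simulate(5, 0.2)

assert fast_events == slow_events  # identical from the inside
print("internal logs identical; wall-clock stamps differ:", fast_times, slow_times)
```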
 
It depends on your take on consciousness, Paul.

If you hold the view that consciousness is a property of information alone, that just happens to find a physical instantiation in human brains, then mathematically the "rate" is irrelevant.

If you hold the view that the substrate itself is somehow intrinsic to consciousness, beyond the behavior it offers to the information that might be instantiated upon it, then "rate" could indeed be important.

Note, however, that the latter is mathematically incoherent and could be viewed as a branch of straight-up dualism.

Also, note that the former is a statement about the whole system, not just the consciousness. In other words, if you slowed down a human brain, and not the world around it, there would definitely be a difference perceived by at least the individual.
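
To illustrate that last point (a toy sketch with made-up numbers, not a claim about brains): rate only matters relative to the environment. Step the same agent once per world tick and it catches every event; step it at a tenth of that rate and it misses most of them, so its view of the world genuinely differs.

```python
# Toy sketch (names and numbers mine): rate matters only relative to the
# environment. An agent stepped once per world tick sees every event; the
# same agent stepped at a tenth of that rate misses most of them.

def run(world_events, agent_period):
    perceived = []
    for world_tick, event in enumerate(world_events):
        if world_tick % agent_period == 0:  # the agent only "wakes up" on these ticks
            perceived.append(event)
    return perceived

world_events = [f"event-{i}" for i in range(20)]

normal_agent = run(world_events, agent_period=1)
slowed_agent = run(world_events, agent_period=10)

print(len(normal_agent), "events perceived at normal rate")                   # 20
print(len(slowed_agent), "events perceived when slowed 10x:", slowed_agent)   # 2
```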
 
Regarding speed, when you're dealing with the effects of physical systems, you can encounter windows of viability, and I believe that's the case with consciousness.

For example, a heart that beats too slowly or too fast over a long enough period of time will fail to do what a heart is designed to do, resulting in failure of the entire biological system.
 
