Robot consciousness

If it's more than a parlor trick, I would be extremely impressed. The claim is that he was writing the same thing in different languages with each hand, no? I guess that's quite a bit easier than writing different things. Anyway, your time-slicing argument would almost certainly cover this.

As previously mentioned though, you could easily use multiple people.
 
As previously mentioned though, you could easily use multiple people.

Well, what I'm getting at is that I don't think this thought experiment is getting us anywhere.

If we want to ask a question about consciousness, we need to consider the human brain. What's the point in considering some other type of brain which must, essentially, be identical?

And if it's a different type of brain that also produces consciousness that we want to consider, then before we engage in a thought experiment, we must describe how that different sort of brain produces consciousness.
 
I agree that all these types of threads are pretty silly. Such big assumptions have to be made about consciousness that really all one is doing is looking at the consequences of those assumptions rather than the actual properties of consciousness.
 
If you claim that one person can handle everything that's going on in the brain simultaneously, rather than sequentially, then I disbelieve you.
I agree a real human will have problems, but you did not seem to confirm my question about whether the objection comes down to human fallibility (you gave a different answer).

But let's assume that's not what we're talking about, and instead we're talking about, say, a staff of hypothetical infallible humans working at the same time.

Right, that's what I thought you were talking about -- well, one such human, anyway.

Ok, then what are we really talking about?

Let's say that we've got this robot wired up, and all we do is introduce a pause at one moment: whatever calculations are happening are switched over to our staff of mathematically infallible humans, who take the outputs as inputs, run the calculations, then feed the results back in.

In that case, you're just doing the equivalent of putting the robot on pause.

Is the robot conscious during this pause? Given that nothing is happening in the robot's hardware, how could it be?
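
For concreteness, here's a minimal sketch of the hand-off being described (Python; all names, like brain_step, are hypothetical): because the brain's update is a pure function of state and inputs, any substrate -- the robot's own hardware or a staff of infallible humans -- can compute the next state while the hardware sits idle.

```python
# Minimal sketch of the scenario described above (all names hypothetical).
# The robot's "brain" is modeled as a pure state-transition function, so the
# next state can be computed by any substrate: the robot's own hardware, or
# a staff of infallible humans with pencil and paper.

def brain_step(state: dict, inputs: dict) -> dict:
    """One tick of the robot's brain: a pure function of state and inputs."""
    return {"tick": state["tick"] + 1,
            "memory": state["memory"] + [inputs]}

def run_on_hardware(state, inputs):
    return brain_step(state, inputs)           # normal operation

def run_by_human_staff(state, inputs):
    # The humans receive the outputs as inputs, do the same calculation by
    # hand, and feed the result back in. Formally it is the same function.
    return brain_step(state, inputs)

state = {"tick": 0, "memory": []}
state = run_on_hardware(state, {"camera": "red ball"})
state = run_by_human_staff(state, {"camera": "red ball moves left"})
# The resulting state is identical either way; the question in dispute is
# whether consciousness is present while the hardware sits idle.
print(state["tick"])   # 2
```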

I think you're confusing where we're positing the consciousness to be. The consciousness is an artefact of the system. We've now made a bunch of humans, with pencils and paper, part of that system. Their symbol manipulation continues to manifest this hypothetical consciousness (not to be confused with the humans' own consciousness, of course).

I certainly agree with you that if you put the robot into hibernation, have an external agent advance its state, and then load the advanced state onto an unhibernated robot, the robot is not conscious during hibernation. But that's an uninteresting question. I could, for instance, replace the horde of infallible humans with a second instance of the robot, and then just move the state from one robot to the other, alternating which one is on and which one is off -- where's the consciousness?
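
A toy sketch of that alternation, under the same assumptions (all names hypothetical): two identical instances, only one powered at a time, with the full state handed back and forth between ticks.

```python
# Sketch of the two-robot alternation described above (names hypothetical).
# Two identical instances take turns being "on"; the full state is moved
# from one to the other between ticks.

class Robot:
    def __init__(self):
        self.state = None
        self.powered = False

    def step(self, inputs):
        assert self.powered, "an unpowered robot does no computation"
        self.state = {"tick": self.state["tick"] + 1, "last": inputs}

def transfer(src: Robot, dst: Robot):
    dst.state, src.state = src.state, None     # move the state, don't copy it
    src.powered, dst.powered = False, True

a, b = Robot(), Robot()
a.state, a.powered = {"tick": 0, "last": None}, True

for inputs in ["x", "y", "z", "w"]:
    active = a if a.powered else b
    active.step(inputs)
    transfer(active, b if active is a else a)  # hand off after every tick

# The computation proceeds seamlessly, yet at no moment is there one
# continuously running machine -- which is the point of the question
# "where's the consciousness?"
```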
 
I think you're confusing where we're positing the consciousness to be. The consciousness is an artefact of the system. We've now made a bunch of humans, with pencils and paper, part of that system. Their symbol manipulation continues to manifest this hypothetical consciousness.

How, when the hardware is now idle?
 
Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains.

Would the robot be conscious if we ran the computer at a significantly reduced clock speed?

I think so. Take film projection as an (apt?) analog of consciousness: each frame is an instant of information flashed from the subconscious. Slow down the projector and the viewer begins to notice the breaks between frames. But here the subconscious 'projector' and the conscious 'viewer' are part of the same system, so the viewer is slowed down too; it shouldn't experience any change, though to an outsider the robot would appear a lot slower, 'stupider', taking much longer to process information.
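One way to see why the slowed-down system shouldn't notice anything from the inside: a deterministic program's internal trace is identical however slowly its steps are executed, so long as it never consults an external clock. A toy sketch, with hypothetical names:

```python
# Toy illustration of the point above (all names hypothetical): a
# deterministic program produces the same internal trace regardless of how
# slowly its steps are executed, provided it never reads an external clock.

import time

def run(program_steps, delay=0.0):
    trace, state = [], 0
    for step in program_steps:
        state = step(state)
        trace.append(state)
        time.sleep(delay)       # slow the "clock"; invisible from inside
    return trace

steps = [lambda s: s + 1, lambda s: s * 2, lambda s: s - 3]
assert run(steps) == run(steps, delay=0.05)   # identical internal history
```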

What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?

As long as the single-stepping / hand-execution and the consequent waiting around for results aren't part of the robot's consciousness, it shouldn't experience anything peculiar. The occasional interesting output would be relayed to the robot's consciousness for real-time evaluation -- integration with the robot's current understanding of itself and its environment, processing by algorithms which aren't innate to the robot or committed to memory (e.g., following instructions in a book), et al. -- as well as the constant 'stream of consciousness' monitoring of the environment -- where it is, how it 'feels', threat assessment -- and of the task at hand -- semantic labels and associations that may appeal to 'reason', suggest solutions, set new priorities, queue up other tasks, etc.
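And single-stepping is just the limiting case of the same idea: the program driven one step at a time from outside, as a hand-executor with pencil and paper would drive it. A minimal sketch (hypothetical names):

```python
# Sketch of single-stepping the same program by hand (names hypothetical).
# Execution is advanced one step at a time from outside -- the analogue of
# a person working the computation forward with pencil and paper.

def brain_program(state):
    for step in [lambda s: s + 1, lambda s: s * 2, lambda s: s - 3]:
        state = step(state)
        yield state             # pause here until someone advances us

stepper = brain_program(0)
for state in stepper:           # each iteration is one "hand-executed" step,
    print(state)                # however long the executor dawdles between them
# prints 1, 2, -1 -- the same trace as uninterrupted execution
```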

In the pencil and paper model, I'm imagining the robot's consciousness of surroundings presented in the form of a flipbook of child's crayon sketches; imperfect, but functional ("that's beautiful, honey; so I'm on a, um... plane? oh, eating dinner -- those are potatoes, not clouds... s'wonderful, you're really talented").

I can't take credit for these questions; they were posted on another forum. The following paper is relevant to this issue:

http://www.biolbull.org/cgi/content/abstract/215/3/216

~~ Paul

Have only skimmed beyond the abstract, but I like his definition of consciousness as "integrated information" -- information integrated for consciousness -- which is more precise than the standard "information processing", most of which is unconscious (algorithms, assembly, compilation, and integration from bits, if the computational theory of mind is correct). The concept of 'qualia-space' looks interesting too: one-to-one mappings from bit matrices to experience; actual nuts-and-bolts work (vs. our very 'what-if' discussions in this forum).

Thanks for linking, Paul. Hope to read it over this week. :)
 
Which hardware? The humans, pencils, and paper are the hardware when they're executing the program.

And that hardware is designed to create consciousness when the software runs on it?
 
And that hardware is designed to create consciousness when the software runs on it?

I fail to understand your point. I thought we were talking about a program that could be executed on any Turing complete hardware. To me that includes paper and pencil operated by a human.
 
Piggy said:
Why would it work in this case?
Man, this is one of the lowest bandwidth conversations I've had in quite some time. :D

Can the robot brain algorithms detect that they are being run at a very slow speed (by hand simulation)? If not, then it should not matter, regardless of whether they are normally run on a uniprocessor or multiprocessor. If they have some way of detecting the running speed (e.g., by comparison to an independent wall clock), then it might matter.

I can hand simulate a multiple-process algorithm by time-slicing my simulation, in the same manner as a uniprocessor runs multiple processes.
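
To make the time-slicing concrete, here's a toy sketch (hypothetical names): two 'processes' written as generators, advanced round-robin by a single executor, just as a uniprocessor's scheduler interleaves multiple processes.

```python
# Toy sketch of hand-simulating a multiple-process algorithm by time-slicing
# (all names hypothetical). Two "processes" are written as generators; a
# single executor -- a uniprocessor, or a person with pencil and paper --
# advances them round-robin.

def process(name, n):
    for i in range(n):
        yield f"{name}: step {i}"

procs = [process("A", 3), process("B", 3)]
while procs:
    current = procs.pop(0)      # take the next runnable process
    try:
        print(next(current))    # run it for one time slice
        procs.append(current)   # put it back at the end of the queue
    except StopIteration:
        pass                    # process finished; drop it
# Output interleaves A and B: exactly what a uniprocessor's scheduler does.
```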

~~ Paul
 
Is the system conscious at any bandwidth? Nobody knows for sure.

Is the system conscious at reduced bandwidth? Nobody knows for sure.
 
Warning: The following may be complete bobbins:

Would a 'conscious' machine, be it electronic or otherwise, need to be moving at a sufficient speed for its actual nuts-and-bolts operation to be moving too quickly for its generated consciousness to detect?

(I did warn you, it's late and I'm tired)
 
I fail to understand your point. I thought we were talking about a program that could be executed on any Turing complete hardware. To me that includes paper and pencil operated by a human.

We're talking about a robot that has something equivalent to a brain which runs software that produces consciousness in much the same way that the human brain does.

OP said:
Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains.
 
Can the robot brain algorithms detect that they are being run at a very slow speed (by hand simulation)? If not, then it should not matter, regardless of whether they are normally run on a uniprocessor or multiprocessor. If they have some way of detecting the running speed (e.g., by comparison to an independent wall clock), then it might matter.

I can hand simulate a multiple-process algorithm by time-slicing my simulation, in the same manner as a uniprocessor runs multiple processes.

The algorithms don't need to detect anything.
 
Piggy said:
The algorithms don't need to detect anything.
I give up. Your responses are so terse that I have no idea what you're trying to say. I do not understand why you think hand simulating a robot brain algorithm wouldn't work. And if you keep responding in 1-sentence fragments, I never will.

Yes, we do. That is stipulated.
No it isn't.

~~ Paul
 
nathan said:
So, although in theory you could use a pencil and paper to simulate the original assumed machine, it might take you longer than is feasible to simulate anything related to a 'conscious event' in the simulated consciousness.
What is feasible? Anyway, point taken.

However, what I'm interested in is the question of whether the algorithms might be real-time-sensitive in some way that makes them behave consciously at certain speeds but not at others.

~~ Paul
 
I give up. Your responses are so terse that I have no idea what you're trying to say. I do not understand why you think hand simulating a robot brain algorithm wouldn't work. And if you keep responding in 1-sentence fragments, I never will.

Well, we're even, because I can't fathom how you can possibly imagine that working out your program by hand, without running it on the hardware, will be tantamount to running it on the hardware.

By comparison, let's say I'm writing software that will, when running on my client's system (and I mean business client), allow field agents to remotely complete or update inspection reports.

And let's suppose that I'm developing a particular module for that software.

I can work it all out on paper and check it and recheck it and make sure it's kosher.

But it doesn't do what it's designed to do until and unless I actually compile it and run it on the system that it's designed to run on.

Similarly, if you have software that has the effect of producing conscious awareness in a robot brain when it's properly running, it will only work when actually running in the robot brain.

You can work it all out on paper till you die, but it's not going to produce consciousness unless it's running in the robot brain.


No it isn't.

I think we may be interpreting this sentence differently.

Is the system conscious at any bandwidth? Nobody knows for sure.

I took that to mean "Is the system conscious at any bandwidth at all?"

You could also take it to mean "Is the system conscious at any given (i.e. every possible) bandwidth?"

It is stipulated that the system is conscious when the software is running in the robot brain, so the former reading is indeed stipulated.

It's the latter interpretation which is at issue.
 
