Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains.
Would the robot be conscious if we ran the computer at a significantly reduced clock speed?
I think so. Take film projection as a (perhaps apt) analogy for consciousness: each frame is an instant of information flashed up from the subconscious. Slow down the projector and the viewer begins to notice the gaps between frames. But here the subconscious 'projector' and the conscious 'viewer' are parts of the same system, so the viewer is slowed down too; it shouldn't experience any change, though to an outsider the robot would appear much slower, 'stupider', taking far longer to process information.
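The clock-speed point can be made concrete with a toy sketch (my own illustration, not from the original discussion): a deterministic program traces out exactly the same internal state sequence whether it runs at full speed or with an arbitrary delay between steps; only an outside observer, comparing against wall-clock time, sees any difference.

```python
import time

def run(program_steps, state, delay=0.0):
    """Apply a sequence of deterministic state-update functions.

    `delay` models a slower clock: it stretches wall-clock duration
    but cannot change the computed state trajectory.
    """
    trajectory = [dict(state)]
    for step in program_steps:
        if delay:
            time.sleep(delay)
        state = step(state)
        trajectory.append(dict(state))
    return trajectory

# A trivial 'mind': three update rules applied in order.
steps = [
    lambda s: {**s, "x": s["x"] + 1},
    lambda s: {**s, "x": s["x"] * 2},
    lambda s: {**s, "seen": s["x"] > 2},
]

fast = run(steps, {"x": 1, "seen": False}, delay=0.0)
slow = run(steps, {"x": 1, "seen": False}, delay=0.01)
assert fast == slow  # identical trajectories, different wall-clock time
```

From the inside (the trajectory), nothing distinguishes the two runs; the delay exists only relative to an external clock.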
What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?
As long as the single-stepping (or hand-execution) and the consequent waiting around for results isn't itself part of the robot's consciousness, the robot shouldn't experience anything peculiar. The occasional interesting output would be relayed to the robot's consciousness for real-time evaluation: integration with the robot's current understanding of itself and its environment; processing by algorithms that aren't innate to the robot or committed to memory (e.g., following instructions in a book); and so on. The same goes for the constant 'stream of consciousness' monitoring of the environment (where it is, how it 'feels', threat assessment) and of the task at hand (semantic labels and associations that may appeal to 'reason', suggest solutions or new priorities, queue up other tasks, etc.).
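Single-stepping can be sketched the same way (again my own toy, assuming a trivial register machine stands in for the robot's program): whether the interpreter loop runs to completion in one go or is advanced one instruction at a time 'by hand', with arbitrary pauses between calls, the halting state is identical.

```python
def make_machine(program):
    """A minimal register machine; each call to step() executes one instruction."""
    state = {"pc": 0, "acc": 0, "halted": False}

    def step():
        if state["halted"]:
            return dict(state)
        op, arg = program[state["pc"]]
        if op == "add":
            state["acc"] += arg
        elif op == "mul":
            state["acc"] *= arg
        elif op == "halt":
            state["halted"] = True
        state["pc"] += 1
        return dict(state)  # snapshot, so callers see each instant separately

    return step

program = [("add", 3), ("mul", 4), ("add", 1), ("halt", None)]

# Run continuously to completion...
step = make_machine(program)
state = step()
while not state["halted"]:
    state = step()
continuous_acc = state["acc"]

# ...or single-step 'by hand', pausing as long as we like between calls.
step = make_machine(program)
hand = [step() for _ in range(len(program))]
assert hand[-1]["acc"] == continuous_acc  # same halting state either way
```

Nothing in the machine's state records how long the pencil-and-paper executor dawdled between instructions, which is the sense in which the robot 'shouldn't experience anything peculiar'.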
In the pencil-and-paper model, I'm imagining the robot's consciousness of its surroundings presented in the form of a flipbook of a child's crayon sketches; imperfect, but functional ("that's beautiful, honey; so I'm on a, um... plane? oh, eating dinner -- those are potatoes, not clouds... s'wonderful, you're really talented").
I can't take credit for these questions; they were posted on another forum. The following paper is relevant to this issue:
http://www.biolbull.org/cgi/content/abstract/215/3/216
~~ Paul
Have only skimmed beyond the abstract, but I like his definition of consciousness as "integrated information" -- information must be integrated to count toward consciousness -- which is more precise than the standard "information processing", most of which is unconscious (algorithms, assembly, compilation, and integration from bits, if the computational theory of mind is correct). The concept of 'qualia space' looks interesting too: one-to-one mappings from bit matrices to experience; actual nuts and bolts (vs. our very 'what-if' discussions in the forum).
Thanks for linking, Paul. Hope to read it over this week.
