
On Consciousness

Is consciousness physical or metaphysical?


  • Total voters: 94
  • Poll closed.
So far, I've seen no evidence that neurons are anything more than simple switches, or that they need gazillions of internal quantum switches. A paramecium doesn't need a supercomputer to get around obstacles. I've written computer programs that let entities defeat obstacles in ways that really appear conscious, but in fact use simple algorithms.
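For what it's worth, here is a minimal sketch of the kind of thing I mean (a toy grid world of my own invention, not the actual programs): a few lines of breadth-first search are enough to make an entity route around a wall in a way that can look deliberate.

```python
from collections import deque

def find_path(start, goal, blocked, width=6, height=4):
    """Breadth-first search on a small grid: a handful of lines,
    yet the resulting motion looks purposeful."""
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while queue:
        pos = queue.popleft()
        if pos == goal:
            path = []                  # walk the chain back to the start
            while pos is not None:
                path.append(pos)
                pos = came_from[pos]
            return path[::-1]
        x, y = pos
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = pos
                queue.append(nxt)
    return None                        # goal unreachable

wall = {(2, 0), (2, 1), (2, 2)}        # a wall with a gap at (2, 3)
print(find_path((0, 0), (4, 0), wall))
```

The agent never "wants" anything; it just expands cells in order. The apparent purposefulness is entirely in the eye of the observer.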


There may be a difference between reception and transmission of signals. There's quite a bit of evidence for quantum events being significant at the level of smell. Yet there are already several machines that can distinguish smells, at least some of them. However, here you are dealing with the receiver, not the transmitter. Whether creating indistinguishable quantum signals is a necessary element of recreating human thought, I don't know.

I certainly think a robot is always likely to smell very different to a human.
 
Robots smell to us like metal, plastic, grease, and phenolic circuit boards :wink:

That's sensory, not computational. I'm talking about quantum computers. There's no evidence for them in nerve cells.
 
It seems to me that (with a suitably broad interpretation) both the first and the third option in the poll can be seen as valid from a physicalist viewpoint.

Planet X in the third option could be Earth, the unconscious biological beings could be our distant ancestors, and the conscious machines could be us...

Just a thought :)

The unconscious machine was evolution. The planet X option was my joke that turned out unexpectedly insightful.
 
We have not achieved 100% altruistic behavior as a conscious species ourselves, despite huge amounts of "programming" through research, education, communication and culture. What makes you think there is any possibility of programming 100% altruistic behavior into a conscious machine? Are you suggesting ethics is an objective science which can be mathematically proven? Because without 100% certainty of a conscious machine being altruistic, which conscious human wants to put their life at risk to a conscious machine with superior brute force?

While it might not be impossible to achieve 100% altruism (there is nothing in the laws of physics to prevent it), I don't think it is a goal we can realistically expect to achieve. Science shows us that when we get close, everyone loses out to the remaining selfish jerks, even more so. This is a counter-intuitive idea, but here are a few of the observations leading to it:

* We know that altruistic behaviors can evolve out of fundamentally selfish systems: We have seen altruism emerge spontaneously (without being explicitly programmed) in several evolutionary and neural net simulations. We have good evidence that this happens in the wild.

* However, all successful systems have parasites. (Even successful parasites have parasites.)

* When the altruism of a population reaches very high levels (close to 100%, but not quite there), the population becomes too trusting. The whole population gets severely exploited by the small remaining population of parasitic and/or selfish entities. This becomes detrimental to the whole population, including (ironically) the exploiters, at least in the long run, if not usually in the short term.

* Keeping a small percentage of non-altruistic members around actually benefits the population in the long run, because it keeps everyone on their toes a little more. A small number of entities get exploited, so that more of them don't.

* A small percentage of humans have significant sociopathic or psychopathic tendencies: Usually around 1 or 2%, depending on what factors you look at.

Chances are, a strong A.I. would follow the same patterns.

But, if I am wrong, and computer-based A.I. can achieve truly 100% altruism, without any risk of exploitation, then: What's wrong with that? Why would that imply that the history of humanity is a joke? We would still be around to enjoy our lives, even with 98% altruism.
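The "close to 100%, but never quite there" point can be illustrated with a toy mutation-selection model (all numbers here are illustrative assumptions of mine, not results from the simulations I mentioned): even when altruists are strictly fitter, a small rate of random strategy flips keeps a residue of defectors around.

```python
# Toy mutation-selection balance: altruists have a fitness edge s,
# but each generation a small fraction m of offspring flip strategy.

def next_freq(p, s=0.20, m=0.01):
    """One generation: selection, then mutation, acting on the
    altruist frequency p."""
    w_alt, w_def = 1.0 + s, 1.0
    p_sel = p * w_alt / (p * w_alt + (1 - p) * w_def)   # selection step
    return p_sel * (1 - m) + (1 - p_sel) * m            # mutation step

p = 0.5
for _ in range(500):
    p = next_freq(p)
print(round(p, 3))   # settles near 0.94: high altruism, never 100%
```

With these made-up numbers the population stabilizes around 94% altruists. Pushing the flip rate m to zero is the only way to reach 100%, and real systems always have some mutation-like error source.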


There's a thing about consciousness that people don't get. You aren't really conscious and in control. You only think you are.
If that's true, it's still worth exploring how that "sense of being in control" comes about.

It's still an odd little mystery, at the moment. But, what we find out about the brain along the way has been fascinating, and should continue to be so.
 
Prediction is a different realm from computability, because of chaos theory. A computer can simulate the weather, the stock exchange, and wine yields. That it can't predict the future EXACTLY is a red herring.
Which weather, which stock market, which wine yields?
 
Huh... chances are? You seem to have forgotten the basics of evolution, which is the background to the data you presented on human behavior. There are no random mutations of AI machines, and they do not self-replicate through sexual reproduction. How would they follow the same patterns as beings whose very existence depended on evolution? And if they somehow did follow these patterns because that's what we programmed them to do, why in the world would we want to create an AI, with much greater physical strength than any human, which has the possibility of being a psychopath? You think two F-16s with psychotic tendencies is a rational idea because the other 98 don't? :boggled:


The reason why human history is important is because it's the only empirical evidence we have to judge whether a conscious being could achieve 100% altruism, and as you rightly pointed out, this is not the case.
 
This programming for 100% altruistic behavior fails because it goes against our evolved nature, like our programming for healthy eating fails against our imperfectly evolved tastes.

We'd simply program HAL to be nice to people and follow Asimov's Three Laws of Robotics. It wouldn't be rocket science ;)

Oh well that's a relief.
I thought the computationalists were going to create conscious robots who make mistakes like the conscious people they were modeled on.:rolleyes:
 
There are no random mutations of AI machines, and they do not self-replicate through sexual reproduction.

This is not true. Machines have been evolved with random mutation, selection, and reproduction techniques. It works! It also has the same problems and limitations as biological evolution.

Conscious machines could be evolved, though I'm not sure what would be selected for.
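A minimal sketch of the technique (a toy genetic algorithm maximizing the number of 1-bits in a genome; the parameters and fitness function are illustrative choices of mine): random mutation, one-point crossover as the "sexual" step, and truncation selection.

```python
import random

random.seed(0)   # deterministic run for the example

GENOME_LEN, POP, GENS, MUT = 32, 40, 100, 0.01

def fitness(g):                      # toy objective: count the 1-bits
    return sum(g)

def crossover(a, b):                 # "sexual reproduction": one-point
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(g):                       # random mutation: rare bit flips
    return [bit ^ (random.random() < MUT) for bit in g]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]         # selection: the top half survive
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

print(max(fitness(g) for g in pop))
```

The same loop evolves neural-net weights or robot controllers just as readily as bit strings; only the fitness function changes. It also shows the same limitations as biological evolution: premature convergence, local optima, and a permanent need for the mutation "noise".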
 
The ones which actually physically happen.
History does not count when it comes to physical reality; it is all in the mind.

I'm really discouraging derails like this, if you don't mind.
 
Hey, unconscious computers make mistakes. Watch Watson play Jeopardy. Conscious computers would be less likely to make certain kinds of mistakes. We evolved through a highly error-prone process, but I don't see how consciousness in and of itself causes our stupidity.

Kluge: The Haphazard Evolution of the Human Mind (Gary Marcus)
 
There are no random mutations of AI machines, and they do not self-replicate through sexual reproduction.
Wrong. In many cases, they do: virtual entities, at least, can go through functions in which they sexually reproduce, obtain random mutations, etc.

These virtual entities are not conscious, yet. But, it seems that hitting upon altruistic behaviors is a lot more fundamental than achieving consciousness.

How would they follow the same patterns of beings whose very existence depended on evolution?
So far, they already do.

Some clever person might come up with a realistic way to have 100% altruism emerge from an evolutionary system, without external intervention. But, the realistic simulations we have, so far, don't get there.

And if they somehow did follow these patterns because that's what we programmed them to do
Absolutely NOT! I am ONLY referring to patterns that emerge without any explicit programming for those patterns!

I am only referring to emergent behaviors, here. I suspect consciousness is also an emergent property of our brains.

why in the world would we want to create an AI, with much greater physical strength than any human, which has the possibility of being a psychopath?
Curiosity, for one thing. We might have better, more practical reasons than that. But, whether we should do this, or not, is not part of the discussion.

The reason why human history is important is because its the only empirical evidence we have to judge whether a conscious being could achieve 100% altruism and as you rightly pointed out this is not the case.
I think this is a sad statement to make. And it's also off topic. There are plenty of reasons to enjoy human history without worrying about how far from 100% we are in the realm of altruism.

For one thing, history is full of reasons why bad ideas and arguments are bad. And, I think we are getting better at figuring that out.
 
Virtual entities!!!
I see, so mathematical models run on a computer are what we should judge AI progress by?
Okay, now I don't care anymore about this discussion; it's irrelevant to everyone who lives in the real world.
Let me know when you think of building physical examples of your virtual entities that may impact reality, and we can pick up the discussion again.
In the meantime, have fun playing in virtual land.
Try not to use too much electricity whilst you play; it's bad for those who need to rely on reality for survival.
 
There's a thing about consciousness that people don't get. You aren't really conscious and in control. You only think you are. This Horizon episode is well worth watching. It says "only 10 hours left to view", and maybe in some parts of the world you can't view it, but if you can it's well worth it.


Here is the same episode on YouTube... no time limits or country restrictions.

It is VERY interesting.
I also recommend this video to see the facts of where we stand in regards to the possibility of Pinocchio becoming a reality.

 
The exact weather in London on January 21st 2013.
The exact movement of the NYSE from September 1st 2015 to November 2nd 2015.
The exact yield of wine grapes from the Loire wine region in France in 2012.

You know, future physical events which are practically unpredictable.

3-body problem. Much simpler. :D
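You don't even need three bodies to see the problem. A one-line map (the logistic map, a standard textbook example, not something from this thread) is fully computable, yet a billionth-of-a-unit uncertainty in the starting condition destroys exact long-range prediction:

```python
# Logistic map x' = r*x*(1 - x) with r = 4 (fully chaotic, yet fully
# computable). Two runs start one billionth apart.

def trajectory(x, steps, r=4.0):
    out = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-9, 60)

early = max(abs(x - y) for x, y in zip(a[:11], b[:11]))
late = max(abs(x - y) for x, y in zip(a[40:], b[40:]))
print(early, late)   # microscopic error early on, order-one later
```

The simulation is perfect; the prediction still fails, because the initial condition can never be measured exactly. That's the chaos-theory point, and it has nothing to do with whether the system is computable.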
 


There we go again... repeating your behavior over and over again... you remind me of that sphex wasp you like.
I hope you get the point. No matter how many times the researcher moves the prey insect, the wasp's behaviour will not vary. It has no capacity for reflection into its own processes.


In case someone missed it... this is what Pixy means by his "no"
A simple no only works when you have established your position and the other party is talking nonsense.
 
I am currently almost done with the presentations in this:

http://www.aisb.org.uk/publications/proceedings/aisb05/7_MachConsc_Final.pdf

Of particular interest are the two or three papers on research dealing with recurrent neural networks based on a very rough model of brain connectivity.

In two of those papers, they got a robot to effectively "imagine" what the results of an action would be, and use that "imagined" result to modify the choice of current action. All with series of recurrent ANNs.

I don't think it is a question anymore whether a suitably complex artificial neural network could display full consciousness; I think it is just a question of what the topology needs to be in order to get it to work.

EDIT: When I am finished with the whole packet I will provide a summary of each presentation. This stuff is really cool, and it is already seven years old. I can't wait to get my hands on the more recent stuff.
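The "imagine before acting" loop those papers describe can be sketched schematically (this is my own toy stand-in: the forward model here is a hand-coded function, whereas in the papers it is a trained recurrent ANN, and all the names and numbers below are made up):

```python
# Schematic "imagine, then act" agent: a forward model predicts the
# outcome of each candidate action, and the agent commits to the action
# whose imagined outcome scores best.

GOAL = 10.0

def forward_model(state, action):
    """Stand-in for a learned model of the world's dynamics."""
    return state + action           # assumed dynamics: action shifts state

def score(state):
    return -abs(GOAL - state)       # closer to the goal is better

def choose_action(state, actions):
    # "Imagination": roll each action through the model, not the world.
    return max(actions, key=lambda a: score(forward_model(state, a)))

state, actions = 0.0, [-1.0, 0.0, 1.0, 2.0]
for _ in range(8):
    act = choose_action(state, actions)
    state = forward_model(state, act)   # here the model is also the world
print(state)                            # the agent homes in on the goal
```

The interesting step is that the candidate actions are evaluated entirely inside the model, before anything happens in the world, which is a reasonable operational reading of "imagining" a result.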
 


This poll is a false dichotomy...especially when Scott himself has admitted that the third choice was

...The planet X option was my joke ...


The false dichotomy is
You either agree with his SPECULATIONS and CONJECTURES or you are a WOO BELIEVER​

It is not just a false dichotomy...it is an egregious insult to anyone who sides with the scads of scientists who disagree with his FAITH in SCIENCE FICTION.


Before this thread degenerates into more nonsensical armchair speculations from laymen, along with vitriolic, hubristic defense of these conjectures by citing scifi fanfic and adamant, unwavering “monumentally simplistic” “operational definitions” that are “of no practical value”... and before it gravitates towards hypotheses of how the characters in the Sims video game are conscious entities if only you could redefine reality to suit... and before it settles down to wishful thinking and aspirations of some laymen of becoming dei ex machina... I suggest you watch this video to see the facts of where we stand in regards to the possibility of Pinocchio becoming a reality.

The following minutes are of salient relevance
  • 30:10 to 32:20
  • 34:55 to 41:45
  • 42:12 to 45:05 (especially 44:43-45:00)
  • 56:55 to 57:35
  • BUT....ABOVE ALL.... minutes 48:50 to 50:40.....especially the sentence the scientist says at minute 50:08 to 50:10.

 
The Penrose-Hameroff Orch-OR (orchestrated objective reduction) consciousness theory has been widely criticised and generally found to be a chain of unsupported speculation (e.g. Gaps in Penrose's Toilings). Doesn't mean it can't be true, but there's no good reason to think it might be; it's an unnecessary and unnecessarily speculative hypothesis. There are also good QM reasons to doubt it, e.g. Max Tegmark calculated that quantum decoherence is many orders of magnitude too fast for QM to play a direct role.

As the person who mentioned Penrose--I completely agree with calling it "an unnecessary and unnecessarily speculative hypothesis". I'm not trying to defend the hypothesis. I also shan't try to defend hypotheses two and three of the present poll. I'm just saying that Penrose's theory doesn't appear on this poll, and therefore those who do subscribe to it, for whatever reason, don't really have an option to vote for here.

Assuming that the poll items are listed in order from least to most woo-ey, I'd put Penrose's hypothesis between one and two. Something like:

  1. Consciousness depends on known physical processes, and can be simulated, at least in theory, on a general purpose computer
  2. Consciousness depends on known physical processes which are too "quantum" or "chaotic" or otherwise, somehow, inherently beyond our ability to compute.
  3. Consciousness depends on elements beyond physics, possibly beyond our known universe, and beyond our ability to detect.
  4. On Soviet Planet X, Consciousness thinks you. :)

I'd still pick the first one, but at least everyone would have an option to pick.
 