
Artificial Sentience. How would it be recognized?

Isn't that the Shroud of Turin Turambar?

Oops. Forgive me. I asked the wrong question.

I was supposed to ask you to tell me what you feel when looking at the picture. My bad.

(Also, this is part of the test too. :) )
 
The question is, can one have true sentience and self-awareness without the uncontrolled emotions and other thought patterns that humans are prone to, not to mention simple basics like certain mortality, pain, disease, and more?

Can we ever build a machine that has the same capacity to suffer, or love, and why would we consider imposing something like, say, periodontal disease on something we create, as many believe gods did to us?

This theme has been explored in plenty of scifi stories :)

From the world of science fact, I don't generally have a good answer for you. I've followed deep learning/deep belief networks for a long time, and have seen some really impressive demonstrations, including an algorithm which independently discovered the concept of cats without being told what they were beforehand, and robots learning fine motor tasks like cooking by watching YouTube videos.

There are also some quite impressive demos of artificial neural nets being programmed using a genetic process, resulting in machines discovering how to perform very novel tasks without being told how to do them. I've also seen some really impressive demos of natural language processing, in which a neural net is able to find relationships between words (such as "Man -> Woman", "King -> Queen", "France -> Europe") without being told what those relationships were ahead of time.
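To make that word-relationship trick concrete, here's a minimal sketch using the gensim library and one of its stock pretrained GloVe models. The model name and the specific word pairs are just my illustrative choices, not taken from the demos above:

```python
# Sketch: recovering word relationships from pretrained embeddings,
# in the spirit of the "Man -> Woman", "King -> Queen" demos.
# "glove-wiki-gigaword-50" is one of gensim's standard downloads.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe model

# The classic analogy: king - man + woman ~= queen. Nobody told the
# model these relationships; they fall out of co-occurrence statistics.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
print(vectors.most_similar(positive=["paris", "italy"], negative=["france"], topn=3))
```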

While these kinds of behavior appear thoughtful and intelligent, all of the artificial neural nets I'm aware of have little more intelligence than an insect. That is, they are purely reactive, with no forethought or purposeful contemplation whatsoever.

I believe this barrier can be overcome with a sensibly designed neural net. A human brain is such an efficient processor because it is partitioned into regions that each perform a specialized task, such as visual processing, symbolic processing, memory recall, motor coordination, face recognition, pattern and form recognition, and language production. All of these areas of functional specialization feed into one another, logically linking visual input with other important attributes such as an object's name, its boundaries, and a "concept" of what the object in our field of vision is doing.

Following from the design of human brains, I believe it's very much possible to design a very complex neural net from several simpler, but highly specialized neural nets. I think strong natural language processing could help the neural net relate concrete images, like a cat, with attributes connected to cats ("feline", "soft", "meow", "house pet", "mice", "catfood"), allowing a neural net some rudimentary ability to build up an abstract "concept" of cats and other items. We might also imagine that our neural net can find relationships among abstract concepts, such as differentiating static concepts ("feline", "fur") from temporal concepts ("active", "hungry"), as well as spatial relationships ("cat" -> "under table", "cat" -> "outside"). The grand neural net's experience of the world is the synthesis of all of its smaller neural nets working in tandem.
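A toy sketch of what I mean, in PyTorch. The module boundaries and layer sizes are completely made up; the point is just to show specialized nets feeding a shared "concept" layer:

```python
# Sketch: a "grand" network composed of smaller specialized nets.
# All names and dimensions here are illustrative placeholders.
import torch
import torch.nn as nn

class VisionModule(nn.Module):          # stand-in for visual processing
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
    def forward(self, image):
        return self.net(image)

class LanguageModule(nn.Module):        # stand-in for symbolic/word processing
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
    def forward(self, words):
        return self.net(words)

class GrandNet(nn.Module):
    """Feeds the specialized modules into one another by fusing
    their outputs into a shared 'concept' representation."""
    def __init__(self):
        super().__init__()
        self.vision = VisionModule()
        self.language = LanguageModule()
        self.fusion = nn.Linear(128 + 128, 64)   # shared concept layer
    def forward(self, image, words):
        concept = torch.cat([self.vision(image), self.language(words)], dim=-1)
        return self.fusion(concept)

net = GrandNet()
out = net(torch.randn(1, 784), torch.randn(1, 300))  # toy inputs
print(out.shape)  # torch.Size([1, 64])
```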

It's interesting to consider how a neural net of this design might "think". For comparison, humans have a nearly constant stream of internal monologue making up most of our thoughts; we also make plans, dwell on decisions, and set long- and short-term goals for ourselves. The major advantage of human thought over artificial neural nets is our ability to derive new information from existing data. Left to idle on its own, it's hard to imagine that a machine is constructing new information, although it might be capable of discovering new relationships between data it has already stored. It may be possible for a machine to discover certain expectations of human interactions without necessarily being told what those expectations were ahead of time. Likewise, a machine might infer a weak relationship between two concepts, like "cats" and "humans", and attempt to determine whether there is a stronger relationship connecting those concepts, namely that humans like to be around cats. Artificial intelligence of this sort quickly approaches the territory where it's appropriate to call the machine "sentient".
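As a toy illustration of that "cats"/"humans" example, here's one simple way a machine could surface a candidate relationship from data it already has: pointwise mutual information over co-occurrence counts. Every number below is invented for illustration:

```python
# Sketch: flagging a possible relationship between stored concepts
# via pointwise mutual information (PMI). All counts are made up.
import math

observations = 1000                 # total stored "scenes"
count = {"cat": 120, "human": 400}  # scenes containing each concept
count_both = 90                     # scenes containing both

p_cat = count["cat"] / observations
p_human = count["human"] / observations
p_both = count_both / observations

pmi = math.log2(p_both / (p_cat * p_human))
print(f"PMI(cat, human) = {pmi:.2f}")  # > 0 means the concepts co-occur
                                       # more than chance would predict:
                                       # a lead worth investigating for a
                                       # stronger connection
```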

The holy grail of sentient machines is emotional machines. I don't see any reason, in principle, why emotional processing could not be another functional specialization of a neural net. The challenge here is that emotional responses depend on many highly complex, abstract inputs. Additionally, it's difficult even in principle to define what an emotion is and how it affects the high-level behavior of our neural net. Most humans would instinctively leap back from a rattling snake in the wild, or feel sad in response to losing a friend or family member. An interesting question is whether a sufficiently advanced artificial neural net could have "wants", "attachments", or "interest" in particular things or people. A scary but realistic possibility is that a machine might infer the wrong emotion, like associating hurting people with happiness.

If and when humans begin designing machines with emotional intelligence, I expect that we will first explore very primitive emotional responses, like fight or flight. From there, we might reasonably be able to abstract more complex responses, like fear, surprise, or relief. I can see these building up to the familiar emotions like happiness, sadness, pleasure, pain, etc. It would be important for an artificial neural net to infer the emotional state of people and animals around it, in order to build up very complex emotions like empathy, embarrassment, shame, and familiarity. From there, these emotional responses could generalize to curiosity and compassion, which may logically lead to responses analogous to love and justice.
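Here's a deliberately crude sketch of what a fight-or-flight primitive might look like as code. The inputs, thresholds, and function name are arbitrary placeholders, just to show a scalar "arousal" state gating which behavior wins:

```python
# Sketch: a primitive fight-or-flight response as a functional
# specialization. All thresholds are arbitrary illustrations.

def emotional_policy(threat_level: float, own_strength: float) -> str:
    """Map a crude appraisal of the situation to a primitive response."""
    arousal = min(1.0, threat_level)   # fast, reactive pathway
    if arousal < 0.3:
        return "ignore"                # calm: carry on
    if own_strength >= threat_level:
        return "fight"                 # aroused and confident
    return "flee"                      # aroused and outmatched

for threat, strength in [(0.1, 0.5), (0.8, 0.9), (0.9, 0.2)]:
    print(threat, strength, "->", emotional_policy(threat, strength))
```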

The most interesting aspect of the entire discussion above is that we have, right this second, the technical knowledge and skill to construct very sophisticated artificial intelligence with many areas of functional specialization. IBM Watson is probably the best example of this, with 40+ different subsystems, as is Siri, which has several dozen subsystems of its own. Of course, neither Watson nor Siri is purpose-built for sentience. However, I believe it is possible to construct, within my lifetime, an artificial intelligence which progressively approaches childlike degrees of sentience and autonomy. :)
 
In the evolution of our own consciousness, emotions came first, in the form of basic survival reactions that are still hard-wired in the more primitive portions of our brains. As we know, these primitive reactions can easily overwhelm our modern, "rational" processes....

They are part and parcel of our uniquely human consciousness. However, that's not to say that a perfectly functional consciousness could not be fabricated without emotions. If we were programming such a consciousness, and we wanted some sort of "moral" component....At that point we could no doubt build it in....Something along the lines of Asimov's laws.

Should we get to that point....

Actually, this is not quite backward, but it's a bit sideways. I've been doing a lot of Cognitive Science recently (I'm actually quite serious about the steampunk-looking robots.)

The idea that there's this reason engine in humans and that emotion overrides it is completely wrong. I won't quite say that you're saying this, but it needs to be pointed out, especially as the concept that Lakoff calls "objectivism" (nothing to do with Ayn Rand) is something that skeptics fall afoul of.

Emotion is necessary in brains in order to do logic in the first place. Without emotion, there are no inhibitory synapses, and without that, all you have is Hebbian learning, which might be good enough to run a lobster but not a human.
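For anyone unfamiliar, plain Hebbian learning is just correlation reinforcement. A minimal NumPy sketch of my own (values invented); note that with non-negative activity the weights can only grow, which is exactly the limitation at issue when there's no inhibition:

```python
# Sketch: the bare Hebbian update -- weights strengthen whenever
# pre- and post-synaptic activity coincide. With only this rule
# (no inhibition, no normalization), weights never decrease.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(5)              # presynaptic activity (non-negative)
w = rng.random(5) * 0.1        # small random initial weights
eta = 0.1                      # learning rate

for step in range(10):
    y = w @ x                  # postsynaptic activity
    w += eta * y * x           # Hebb's rule: dw = eta * pre * post
print(w)                       # weights only ever grow: runaway
                               # reinforcement without a counteracting
                               # (inhibitory) mechanism
```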

Machines don't need that, because NOT gates are cheap. However, emotions, or some analogue of them, turn out to be exceptionally good for controlling massively parallel systems. Furthermore, varying emotions are very good as a way of controlling a global search mechanism that can do saltations out of local minima. What makes it work so well is that the neural network is also programmable in the different emotional states.
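That "global search with saltations out of local minima" idea maps nicely onto annealing-style search. Here's a sketch where a temperature-like arousal parameter plays the emotion role; the objective function and cooling schedule are invented for illustration:

```python
# Sketch: an "emotion" analogue as a global control signal for search.
# High arousal permits wild jumps (saltations out of local minima);
# cooling toward calm settles into exploitation.
import math
import random

def energy(x):
    return x * x + 3 * math.sin(5 * x)    # bumpy landscape with local minima

random.seed(0)
x = 4.0
arousal = 2.0                             # high "arousal": big wild jumps
while arousal > 0.01:
    candidate = x + random.gauss(0, arousal)
    delta = energy(candidate) - energy(x)
    # accept worse moves with a probability that shrinks as we calm down
    if delta < 0 or random.random() < math.exp(-delta / arousal):
        x = candidate
    arousal *= 0.99                       # gradually calm down
print(f"settled near x = {x:.3f}, energy = {energy(x):.3f}")
```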

In my system, I've already seen properties that remind me very much of depression, mania, and mixed states. They haven't caused many problems yet, but some day, I might have to put in an emotional mechanism to regulate it.
 
Following from the design of human brains, I believe it's very much possible to design a very complex neural net from several simpler, but highly specialized neural nets.

I was thinking that as I read the first part of your post.

I also wonder if you could make a hierarchical structure of neural nets: a lot of small neural nets that are then linked together into a larger net, and so on. I'm not familiar enough with them to say if that's even a meaningful distinction or statement, though.
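Something like this, perhaps (a PyTorch sketch of my own, with arbitrary sizes and depth), where each level combines the outputs of its child nets and levels can themselves be children of higher levels:

```python
# Sketch: a recursive hierarchy -- small nets combined by a level,
# levels combined by higher levels. Dimensions are placeholders.
import torch
import torch.nn as nn

class Level(nn.Module):
    """One level of the hierarchy: several child modules whose outputs
    are concatenated and combined by this level's own small net."""
    def __init__(self, child_nets, child_out, out_dim):
        super().__init__()
        self.child_nets = nn.ModuleList(child_nets)
        self.combine = nn.Sequential(
            nn.Linear(child_out * len(child_nets), out_dim), nn.ReLU())
    def forward(self, x):
        parts = [child(x) for child in self.child_nets]
        return self.combine(torch.cat(parts, dim=-1))

leaf = lambda: nn.Sequential(nn.Linear(16, 8), nn.ReLU())  # smallest nets
level1 = Level([leaf() for _ in range(4)], child_out=8, out_dim=8)
level2 = Level([level1, Level([leaf() for _ in range(4)], 8, 8)], 8, 4)
print(level2(torch.randn(1, 16)).shape)  # torch.Size([1, 4])
```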
 
... The major advantage of human thought over artificial neural nets is our ability to derive new information from existing data. Left to idle on its own, it's hard to imagine that a machine is constructing new information, although it might be capable of discovering new relationships between data it has already stored...

How do you think we derive new information from existing data if not by discovering new relationships between aspects of that data - albeit at various levels of generalisation and abstraction (metaphor, analogy, etc.)?
 
Let's find out. Tell us what you see here:
I see a balrog capturing two hobbits. The spiky bit in the middle is their hair, the extremities are their feet. Anyone else see that?

Could be a nazgul as well, really.


Regarding discussion of neural nets: frankly, we don't even know enough about the basic networking properties of the brain to be able to say how a cortex-inspired network "should" function. It's all people taking behavioural or incomplete gross activity data and shoehorning that into the capabilities of existing computers, then calling it good enough for now.
 
