
Artificial Intelligence

Ah, that all depends on your definition. And definitions vary quite a bit. If you mean human-level reasoning, we're not even close, and I don't think we can ever get there - I think we're too stupid to fully understand the complexity of our own brains.

But some definitions are quite a bit broader, including much simpler machine-based decision making. And if your definition is sufficiently broad, then yes, we have artificial intelligence all over the place, including video games. But I don't think anyone's ever come up with a really definitive way to differentiate between what, say, a computer-controlled video game opponent does and what a human does, other than by degrees. Not that the difference is insignificant, but if all there is between us and simple machine algorithms is a vast (HUGELY vast) difference in magnitude, then regardless of how big that difference is, you're not really going to reach consensus on where in the middle of all that to stick your boundary between what counts as intelligence and what's just simple algorithms. Maybe there is some other fundamental difference besides just complexity, but nobody's been able to pin one down definitively. Then again, that may just be our stupidity failing to recognize the fundamental difference; it's not necessarily an indication that there is none.
 
If the question is, "Hey, what about all the hype about AI ten or fifteen years ago, has that led to anything?" the answer is: Nope. Dead end. No appreciable progress in years that I've heard of. Computer scientists are a bit depressed about this.
 
Ziggurat said:
If you mean human-level reasoning, we're not even close
This would depend on the human.

I've met a few (on this board even) with the cognitive abilities of early versions of Lisa.

And pillory is a poor imitation of Racter, come to think of it.
 
In my AI class we did a funny comparison, purely informal. First we had a chat with Eliza (she's out there somewhere; if you have emacs, you should be able to run her through that, otherwise, well, the web is a fabulous thing). Then we headed over to the chatbot at Steven Spielberg's AImovie.com or some such thing. What we witnessed was fifteen or so years of progress in conversational AI. We've got a few years of research still ahead of us, I think...
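(For anyone curious what Eliza is actually doing under the hood, here is a minimal sketch in Python of the keyword-match-and-reflect trick she relies on. The patterns and canned responses are invented for illustration; the real DOCTOR script that ships with emacs is considerably more elaborate.)

[code]
import random
import re

# Pronoun reflections so "I am sad" comes back as "you are sad".
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "you": "I"}

# A tiny, made-up rule set; the real Eliza script has dozens of patterns.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)", ["Tell me more about feeling {0}.", "Do you often feel {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please go on.", "I see.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned response for the first matching pattern."""
    text = utterance.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)
    return "Please go on."

print(respond("I am worried about my exams."))
# e.g. "How long have you been worried about your exams?"
[/code]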
 
One approach to AI is the 'top down' approach, an attempt to reverse-engineer human intelligence through reductionist methods. Most computer scientists have abandoned this approach, if not the entire undertaking, as hopeless; but philosophers, neurologists, and psychologists have all had a crack at this. I agree with Sundog and Ziggurat that this approach does not appear likely to produce impressive results anytime soon; the closest thing to progress so far is a better appreciation of the magnitude of the problem.

Another approach is what might be called the 'bottom up' approach, an attempt to jumpstart a process which, running on its own, could result in the emergence of something we might call intelligence (defining that, btw, is itself not a trivial part of the problem). I think this is where our best hope lies.

The computer scientists who favor this approach are dug in for the long haul. On this frontier, progress is measured in rather smaller increments than is common elsewhere; yet these brave pioneers press on.
 
When visiting Pittsburgh once, I saw the fantastic Carnegie Mellon University, where they spend millions of dollars to teach a computer to play chess. Then I went downtown, and during rush hour they had the traffic lights controlled by a police officer standing there by a control panel pushing a button.

I don't think the work done in AI has had much to do with actual thinking. Thinking has to have some value or quality to it other than just proving something can be done. But that's just what I think.
 
(I'm a little skeptical of the following use of reasoning...)

If the human brain works in a way where everything (even consciousness) is explainable by natural phenomena and matter, why shouldn't we eventually be able to build computers that recognize their own existence? Hey, the human brain has been carefully crafted by millions of years of evolution; computers have been around for just over half a century. Who's to say we won't have sentient automatons in 3.5 million years (or, at the rate computers are going now, a few hundred years)?
 
Originally posted by Yahweh

(I'm a little skeptical of the following use of reasoning...)

If the human brain works in a way where everything (even consciousness) is explainable by natural phenomena and matter...
That consciousness is dependent on natural phenomena and matter can be easily demonstrated with a baseball bat. Explainable is something else again.
... why shouldn't we eventually be able to build computers that recognize their own existence?
It's probably fair to say that most of us are at least a little skeptical about this, but if it is possible, I'd say your time frame is more realistic than what some in the early days proposed.
 
I have played with AI for a while, starting in the 80's, and the most robust systems we have developed are some automata that can mimic simple behaviors (fight/flight, feed, dedicated action), either through massive rule sets or through adaptive/learning rule sets (neural nets). Processor power and economy of memory, both in terms of size and power consumption, have followed Moore's law quite accurately. The problem is our understanding of what the I in AI is. This seems to still be in the realm of the philosophical, the dinner conversations of people like Steven Gould.

The good thing? We have learned much about the biological processes that support the mechanism of real-world recognition and feedback with the use of specialized sensors, so we can gather the information accurately. We have extremely fast, efficient processors that can process the information any way we want to (especially with hybrid ASICs: analog and digital on the same chip with a devoted function, say pattern recognition). The gap is in integration, not in the systems sense but in the cognitive sense.

There's a thing in biology called "binding", which is the process of taking disparate streams of data and integrating them into a single outcome or idea. It works like this: you hit your hand with a hammer; you see it happen, you hear it happen, you experience pain. All of these separate events occur as different mechanisms in the body, at different processing speeds in the brain, yet the cognitive reality is that it was a single event. Every grade school kid knows that the image at the back of the eye is actually upside down, and we can do that with any graphics program; but the integration of all these approximations from different sensory apparatus at different times into a single, actualized, understandable concept for the brain to react to is, I believe, the hairy point. That's only the recognition half of the equation; after that comes action. My opinion is that that half is simpler... but hey, who knows?
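(To make the "massive rule sets" idea above concrete, here is a minimal sketch in Python of a fight/flight/feed automaton of the kind described. The stimuli and thresholds are invented for illustration; an adaptive/learning version, e.g. a neural net, would tune them from feedback instead of having them hard-coded.)

[code]
from dataclasses import dataclass

@dataclass
class Stimulus:
    threat_level: float  # 0.0 (none) to 1.0 (severe); illustrative scale
    own_strength: float  # 0.0 (weak) to 1.0 (strong)
    food_nearby: bool

def decide(s: Stimulus) -> str:
    """A hand-written rule set mapping simple conditions to dedicated actions."""
    if s.threat_level > 0.7 and s.own_strength < 0.5:
        return "flee"
    if s.threat_level > 0.7:
        return "fight"
    if s.food_nearby:
        return "feed"
    return "idle"

print(decide(Stimulus(threat_level=0.9, own_strength=0.3, food_nearby=True)))  # flee
print(decide(Stimulus(threat_level=0.1, own_strength=0.3, food_nearby=True)))  # feed
[/code]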

Many people muddy the subject further by introducing the concepts of consciousness and self-awareness. For an automaton to be successful it only needs a program that sets goals and deals with novel situations in regard to those goals (obviously the construct must be robust enough to deal with its environment); it does not have to self-motivate or procreate or even write poetry (unless that's what it's supposed to do). So consciousness need not be addressed, as I am of the opinion it is not needed. Self-awareness is another issue, and unless you are of the mind (lack of?) to argue silly things like "how do you KNOW the ice is cold", or "does this arise in the mind or is the color...", self-awareness can be anything from something as complex as "I knew I shouldn't have had that last drink, ow my head" to a Cadillac checking its tire pressure, oil pressure, coolant, etc. Even that rudimentary behavior fits the definition. The computer you're reading this on does it; it's called POST: Power On Self Test. The computer is "aware" of the required defaults of its parameters to perform correctly and checks itself to make sure it meets those criteria. Not conscious, just aware (as in: to have knowledge of); many confuse the two.
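(To illustrate the rudimentary, POST-style "awareness" described above, i.e. a system that holds a description of its required state and checks its actual state against it, here is a minimal sketch in Python. The parameter names and limits are invented for the example, not taken from any real self-test routine.)

[code]
# A POST-style self-check: compare actual readings against required ranges.
REQUIRED_RANGES = {
    "tire_pressure_psi": (30, 35),   # illustrative limits, not real specs
    "oil_pressure_psi": (25, 65),
    "coolant_temp_c": (70, 105),
}

def self_test(readings: dict) -> list:
    """Return a list of faults: parameters missing or outside their required range."""
    faults = []
    for name, (low, high) in REQUIRED_RANGES.items():
        value = readings.get(name)
        if value is None or not (low <= value <= high):
            faults.append(f"{name} out of range: {value}")
    return faults

print(self_test({"tire_pressure_psi": 28, "oil_pressure_psi": 40, "coolant_temp_c": 90}))
# ['tire_pressure_psi out of range: 28']
[/code]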

On a last note (this is sure to bring out the kooks): some of the concepts we try to tackle in AI are by their very nature tilted more towards philosophy than computer science; in fact, this highly technical field tends to diffuse more the harder it tries to whittle down the information it takes to achieve its objectives. That's OK, though; that's the process in most profound, fundamental inquiries. The question in my mind is whether we run into an epistemological brick wall when we try to study intelligence and consciousness at their base levels. It reminds me of the dialogue between Neo and Cypher in The Matrix, when they're discussing the monitor Cypher is watching and everything on it is in code. Neo: "Do you always look at it encoded?" Cypher: "Well, you have to. The image-translators work for the Construct-Program. But there's way too much information to decode the Matrix."
In other words, is the thing that gives rise to our ability to ask such questions able to understand itself at such a fundamental level, in what basically amounts to self-analysis?
 
Originally posted by TillEulenspiegel

Many people muddy the subject further by introducing the concepts of consciousness and self-awareness...

...consciousness need not be addressed, as I am of the opinion it is not needed.
I can't say I agree.

I think precisely what makes the problem so intractable is that intelligence and conscious self-awareness cannot be so cleanly separated. If sophisticated information processing is all we're looking for, then we're there now. But when considering the prospects for what has been called 'strong AI', what we are looking for in something we can acknowledge as intelligent is evidence of an inner life; the feeling that not only are the lights on, but somebody's home. Some degree of ability to create (or at least appreciate) poetry, art, and music would be a reasonable part of this expectation, along with an ability to 'get' jokes, follow nuances of sub-plot in a story, or parse language rich in double-entendre and subtle innuendo....

...or whatever might be correlates to those things in the environment in which such an intelligence exists...
...which would, by its very nature, be very different from ours...
...which might make it very difficult to gauge the point at which such an intelligence had in fact emerged.

The question in my mind is whether we run into an epistemological brick wall when we try to study intelligence and consciousness at their base levels.
Nevertheless, some are going to continue to try to study these things. They may not have hit a wall, but they certainly have at least entered a bog where the going is slow at best, and there are many opportunities to become mired.
 
TillEulenspiegel said:
The problem is our understanding of what the I in AI is. This seems to still be in the realm of the philosophical, the dinner conversations of people like Steven Gould.

This particular problem is an illusion: the illusion that words have concrete meanings, when really, words can only have meanings within a context. Intelligence to a mathematician is being able to create new mathematical proofs; intelligence to a musician is being able to create music that affects the audience. Who was it that said something like "Neurosis is falling in love with an unanswerable question"? The neurotic, unanswerable question in AI is: "What is intelligence?" Intelligence is an overused word that has little power to specify what a person actually wants to communicate, either to others or to themselves.

We shouldn't be searching for "Artificial Intelligence". We should be looking for computer programs (or circuitry), that can solve human problems that we can't yet solve, and find new potential problems before they become problems for us humans.

The gap is in integration, not in the systems sense but in the cognitive sense. There's a thing in biology called "binding", which is the process of taking disparate streams of data and integrating them into a single outcome or idea. It works like this: you hit your hand with a hammer; you see it happen, you hear it happen, you experience pain. All of these separate events occur as different mechanisms in the body, at different processing speeds in the brain, yet the cognitive reality is that it was a single event.

Have you ever looked into Conceptual Integration Networks, Fauconnier's stuff?

The question in my mind is whether we run into an epistemological brick wall when we try to study intelligence and consciousness at their base levels. It reminds me of the dialogue between Neo and Cypher in The Matrix, when they're discussing the monitor Cypher is watching and everything on it is in code. Neo: "Do you always look at it encoded?" Cypher: "Well, you have to. The image-translators work for the Construct-Program. But there's way too much information to decode the Matrix."
In other words, is the thing that gives rise to our ability to ask such questions able to understand itself at such a fundamental level, in what basically amounts to self-analysis?

Most human functioning is in unconsciously rigid, automatically recurring patterns (mental sets, motor sets, heuristic strategies, etc.). Consciousness is limited to a small portion of mental processing; conscious control is too slow for normal functioning. Consciousness doesn't require knowing how it perceives, only what it perceives.
 
Originally posted by Suggestologist

We shouldn't be searching for "Artificial Intelligence".
I find that I experience a most unpleasant kneejerk reaction to this type of phraseology. I'm hoping that you weren't intending to suggest that such a search would be immoral, but merely a waste of time.

I would answer that of all the human problems we can't yet solve, the nature of consciousness is arguably the biggest. The search for artificial intelligence may go further than anything else we have ever done toward answering some of the most fundamental questions about exactly what it means to be human -- and this whether it succeeds or fails.
 
Dynamic: "I can't say I agree."
Well, the question I was answering was kinda about the current state of AI. We will, I'm sure, find ourselves at the juncture of wanting to incorporate the qualities that consciousness represents; I just don't think they are needed OR doable right now.
Suggestologist: "Consciousness doesn't require knowing how it perceives, only what it perceives"
True, but our ability to create or successfully mimic that thing rests inherently on our ability to understand it.
It sounds on its face like pretzel logic ("depends on what your definition of is, is"), but in order to build a true quantum representation (model) of the universe, many insist you must create a whole other identical universe.
I believe the only way to create a complex conscious entity that compares to a human brain is to build a human brain, and we cannot build something that complex; we don't even have a firm grasp of the definitions used to describe its behavior. Nature's R&D department took millions of years to develop that one; I think it will take us a few more years to figure it out.
;)
 
uneasy said:
When visiting Pittsburgh once, I saw the fantastic Carnegie Mellon University, where they spend millions of dollars to teach a computer to play chess. Then I went downtown, and during rush hour they had the traffic lights controlled by a police officer standing there by a control panel pushing a button.

I don't think the work done in AI has had much to do with actual thinking. Thinking has to have some value or quality to it other than just proving something can be done. But that's just what I think.
There is a problem in people's perception of AI. If a task can be broken down into parts and successfully carried out by a computer with an algorithm, people say "that's not really 'thinking', it's just following a recipe". No matter how far AI advances, what has been done gets discounted because people believe that "thinking" is somehow magical.

Computers can do wondrous things. Modern cars would get nowhere without them. They can autonomously fly airplanes, and it would be impossible to design current integrated circuits without the help of computer programs.

Before trying to involve "consciousness" and "self-awareness", they should be defined, and I suggest it should be an operational definition: if a computer program can do "X", it is conscious. The Turing test is an example; it was an operational way to define "intelligence".
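(One way to read that suggestion: an operational definition is just a test you can actually run. As a toy illustration only, in my own framing rather than an implementation of the actual Turing test, a crude harness might look like this in Python; the judge function and the sample answers are all stand-ins.)

[code]
from typing import Callable, List

def passes_operational_test(candidate_answers: List[str],
                            human_answers: List[str],
                            judge_can_distinguish: Callable[[str, str], bool]) -> bool:
    """Operational definition as code: the candidate counts as "intelligent"
    (by this definition) exactly when the judge can never tell its answers
    apart from a human's on the same questions."""
    return not any(judge_can_distinguish(c, h)
                   for c, h in zip(candidate_answers, human_answers))

# Toy usage with a deliberately naive judge that flags suspiciously short answers.
naive_judge = lambda c, h: len(c) < len(h) // 2
print(passes_operational_test(
    ["I think so.", "It depends on the weather."],
    ["Probably, yes.", "Only if it doesn't rain."],
    naive_judge))  # True
[/code]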
 
arcticpenguin said:

Before trying to involve "consciousness" and "self-awareness", they should be defined, and I suggest it should be an operational definition: if a computer program can do "X", it is conscious. The Turing test is an example; it was an operational way to define "intelligence".
I would just like to add that these terms are contentious even when not applied to artificial intelligence. A quick trip to the R&P forum would show that we have no common definition of consciousness or awareness in ourselves or others, let alone in machines. I don't know if we could ever come up with a definition of these concepts that satisfies everybody; even an operational definition (necessarily more narrow) will be a tough sell. (Just as one example, I myself believe that consciousness and self-awareness are entirely fictitious concepts, and that our insistence on describing them is the modern equivalent of spiritualist descriptions of the soul. I also know that there are others here who vehemently disagree with me. I respect their right to be wrong.) :D
 
Originally posted by AP

Before trying to involve "consciousness" and "self-awareness", they should be defined, and I suggest it should be an operational definition: if a computer program can do "X", it is conscious. The Turing test is an example; it was an operational way to define "intelligence".

A programming project usually begins with a description of the problems the program will be designed to solve, an outlining of the major steps this will involve, and a sketching of the program's basic structure based on that. This all takes place before any actual coding begins. That can be a problem, because what sometimes happens is that you don't understand a problem until you've had a go at solving it. If you've structured the program based on bad assumptions, you may end up scrapping most of it and starting over.

I'm not sure we know enough yet about the nature of consciousness to form valid operational definitions for it. I'm thinking this might become clearer as we go along, as the results of AI research are added to those from psychology, philosophy, neuroscience, etc.
 
I mentioned this on the "spiders" thread. But if Von Neumann's Chain is an accurate description of reality, then I doubt we will ever build a true, self-aware AI. On the other hand, if it is not an accurate description, then we probably will some day. Either way, it would have a major impact on how we view ourselves and our place in the universe.

I personally have no idea which way it will go.
 
Mercutio said:
I would just like to add that these terms are contentious even when not applied to artificial intelligence. A quick trip to the R&P forum would show that we have no common definition of consciousness or awareness in ourselves or others, let alone in machines. I don't know if we could ever come up with a definition of these concepts that satisfies everybody; even an operational definition (necessarily more narrow) will be a tough sell. (Just as one example, I myself believe that consciousness and self-awareness are entirely fictitious concepts, and that our insistence on describing them is the modern equivalent of spiritualist descriptions of the soul. I also know that there are others here who vehemently disagree with me. I respect their right to be wrong.) :D

Like intelligence, consciousness has become overloaded with too many different meanings, making it of little use to someone who wants to communicate a message to another person. Often, when this happens, people come up with new tokens (words) to represent what they're talking about; since the new token is more distinct than the old token, it has more communicative power.

So: Don't add yet another definition to consciousness -- making it even less useful as a word; just start over with a neologism.
 
