
Artificial Intelligence

Mercutio said:
.....I myself believe that consciousness and self-awareness are entirely fictitious concepts, and that our insistence on describing them is the modern equivalent of spiritualist descriptions of the soul.
Strangely enough I think I know what you mean and if what I think you mean is what you mean then I agree. :D

regards,
BillyJoe
(seriously though)
 
Originally posted by Mercutio

I myself believe that consciousness and self-awareness are entirely fictitious concepts, and that our insistence on describing them is the modern equivalent of spiritualist descriptions of the soul.
Maybe what it is is that consciousness is a trick the brain plays on itself, designed for a specific purpose: if I think I'm me, it gives me a basis for imagining what it is like to be you. In that sense, consciousness need not necessarily be viewed as any more (or less) fictitious than any of our other internal representations.
Originally posted by Suggestologist

So: Don't add yet another definition to consciousness -- making it even less useful as a word; just start over with a neologism.
Well said. So: you're the suggestologist; any suggestions?
 
BillyJoe said:
Strangely enough I think I know what you mean and if what I think you mean is what you mean then I agree. :D

regards,
BillyJoe
(seriously though)
Yes, he means what you think he means, and I agree with both of you.
 
Dymanic said:

Maybe what it is is that consciousness is a trick the brain plays on itself, designed for a specific purpose: if I think I'm me, it gives me a basis for imagining what it is like to be you. In that sense, consciousness need not necessarily be viewed as any more (or less) fictitious than any of our other internal representations.
"Other internal representations"? If by this you mean "thoughts" (as separate from thinking), "memories" (as separate from remembering, "id, ego, & superego" (as separate, distinct causal entities), then yes, it is on the same ficticious level. Each of these concepts is tremendously useful in the vernacular, but is no more "non-fiction" than the sun "rising" or the stars "coming out". Each of these is tremendously harmful to our understanding of ourselves when they are viewed as anything more than metaphors. We spend our time (and much of an entire branch of a science, as cognitive psychology shows) chasing the details of something that is, in truth, a fiction. Note that the first part of your quote above attempts to define some aspect of or use of consciousness--but it presupposes that consciousness exists as an entity that can be described.

And yes, AP and BJ, you both read my mind (see how handy a fictitious phrase is in everyday communication), and we can now split the million.
 
Suppose we have a simple sound system: a microphone connected to an amplifier connected to a speaker. The system's 'job' is to collect ambient sounds and reproduce them as faithfully as possible. Perfect performance would mean that the microphone collected everything in the ambient field of sound, and the speaker produced nothing that wasn't there (no artifacts: hisses, pops, whines -- things produced by the system itself).

But when a certain threshold of sensitivity in the microphone and power in the speaker output is crossed, some of the sound from the speaker gets collected by the microphone, and the results become wildly unpredictable -- what we refer to as 'feedback'.

I would ask: should these feedback effects be considered artifacts of the system...or legitimate features of the ambient field of sound?
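(For anyone who wants to poke at the threshold idea, here's a rough Python sketch. The single loop-gain number and the one-impulse "ambient field" are made-up simplifications for illustration, not real acoustics:)

```python
# Toy model: each tick, the speaker reproduces the current ambient sound
# plus a coupled fraction of its own previous output (the loop gain).
def run_system(loop_gain, steps=20):
    """Speaker output over time for one brief ambient sound, then silence."""
    ambient = [1.0] + [0.0] * (steps - 1)  # a single impulse, then nothing
    out = 0.0
    history = []
    for a in ambient:
        out = a + loop_gain * out  # mic re-collects the previous output
        history.append(out)
    return history

quiet = run_system(loop_gain=0.5)  # below threshold: the sound dies away
howl = run_system(loop_gain=1.2)   # above threshold: feedback dominates
```

Below a loop gain of 1 the impulse fades; at or above 1 it grows without bound. In a real room the "wildly unpredictable" part comes from that gain varying with frequency, position, and reflections.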
 
Dymanic said:

I would ask: should these feedback effects be considered artifacts of the system...or legitimate features of the ambient field of sound?

First of all, no feedback will occur in this system until a sound happens to trigger it.

Feedback isn't unpredictable at all. The characteristics of the feedback will depend mostly upon the positioning of the microphone and speaker, and especially the distance between them; this sets up the frequency of the feedback. Once the loop has been established, the sonic qualities of the room also come into play.
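(A back-of-envelope Python sketch of that distance point. The simple "period fits the path" rule and the 2 m example are illustrative assumptions that ignore amplifier phase shift and room reflections:)

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def reinforced_frequencies(distance_m, count=4):
    """First few frequencies (Hz) whose period fits the mic-to-speaker
    travel time a whole number of times, so the loop reinforces them."""
    delay = distance_m / SPEED_OF_SOUND  # seconds for sound to cross the gap
    return [n / delay for n in range(1, count + 1)]

freqs = reinforced_frequencies(2.0)  # speaker 2 m from the microphone
```

Move the mic and the candidate frequencies shift, which is why repositioning is the usual first fix for a howling PA.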
 
Turn off the amplifier. Then, any sounds that were there still are, and any sounds that weren't aren't. If this solution isn't satisfactory, that means the goal of the system wasn't really to leave the ambient sounds untouched but rather to change them in some way. Without further clarification about the way they were intended to be changed, we can't decide if feedback is a problem or not.
 
Now why would you guys want to go and beat up on my nice, innocent little metaphor like that?

What happens in the system I described is that some of the sounds that are being collected by the microphone are outputs from the speaker which reflect an earlier state of the ambient field of sound. These are added to the rest of the sounds being currently collected, then re-collected again, and so on. After a while, the feedback becomes the dominant feature of the speaker's output.

I'm suggesting that this is (loosely) analogous to something that happens in the human brain; that consciousness is a sort of feedback effect; a self-perpetuating cycle which, by constantly mixing current cognitive/sensory inputs with previous inputs/outputs, results in a significant alteration of the overall state of the system.
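(A loose Python sketch of that cycle. The mixing weight and the constant input are arbitrary assumptions, just to show the running state coming to reflect its own history:)

```python
def conscious_loop(inputs, mix=0.7):
    """Each step blends the previous output back into the new one, so the
    state is part current input, part accumulated history of itself."""
    state = 0.0
    states = []
    for x in inputs:
        state = mix * state + (1 - mix) * x  # prior output folded back in
        states.append(state)
    return states

trace = conscious_loop([1.0] * 30)  # a steady input, re-collected each step
```

The first output is mostly input; later outputs are mostly the system's own prior outputs, which is the "significant alteration of the overall state" part of the metaphor.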
 
Dymanic said:
Now why would you guys want to go and beat up on my nice, innocent little metaphor like that?

What happens in the system I described is that some of the sounds that are being collected by the microphone are outputs from the speaker which reflect an earlier state of the ambient field of sound. These are added to the rest of the sounds being currently collected, then re-collected again, and so on. After a while, the feedback becomes the dominant feature of the speaker's output.

I'm suggesting that this is (loosely) analogous to something that happens in the human brain; that consciousness is a sort of feedback effect; a self-perpetuating cycle which, by constantly mixing current cognitive/sensory inputs with previous inputs/outputs, results in a significant alteration of the overall state of the system.

LOL, hey, when something comes up that I actually know about, I have to grab the chance to tell it. :D

I thought you were going for a "random" result coming from an apparatus in a "known" state.

A better example for your metaphor is Jimi Hendrix. He (and many others) used feedback as an instrument in itself very much in the manner you describe, controlling it and using it in combination with the actual notes coming from the guitar to create brand new sounds.
 
Dymanic said:

I find that I experience a most unpleasant kneejerk reaction to this type of phraseology. I'm hoping that you weren't intending to suggest that such a search would be immoral, but merely a waste of time.

I would answer that of all the human problems we can't yet solve, the nature of consciousness is arguably the biggest. The search for artificial intelligence may go further than anything else we have ever done toward answering some of the most fundamental questions about exactly what it means to be human -- and this whether it succeeds or fails.

I am suggesting that such a search is neurotic, because people literally don't know what they're searching for.

Well, by human problems, I include such things as solving the smallpox problem, and apparently the SARS problem has been solved as well.
 
Originally posted by Suggestologist

I am suggesting that such a search is neurotic, because people literally don't know what they're searching for.
Yes, on a more careful reread of your earlier post, what you were saying is more obvious than my unnecessarily testy response suggests I grasped. Whatever intelligence is (if indeed such a thing can be said to exist), I don't appear to have as much of it as I sometimes like to imagine. My apologies.

I want very much to disagree with what you said, but I'm having a hard time finding anything to base an argument on. Part of my effort included looking up 'neurosis' in the dictionary: Any of various mental functional disorders characterized by anxiety, compulsions, phobias, depression, dissociations, etc. I have to admit, this seems like a reasonable description of some of the possible results of a search for understanding of the nature of consciousness.

Still, some pretty cool stuff has been stumbled upon by folks who didn't know what they were looking for (or who were looking for something else, or were looking in the wrong place, etc. -- Christopher Columbus is the first example that comes to mind.) Let's not forget that the science of chemistry basically got its start from some guys who were looking for a way to turn lead into gold. As I said above, sometimes when starting out, you don't know enough about a problem to know what to look for, or what questions to ask. How much of real interest are we likely to find if we limit our search for answers to those areas where we are already certain that the questions we are asking are valid? That's like just filling in the blanks on a prepared form.

In addition, if you look at the personal lives of a lot of the major contributors to understanding in any field, you find plenty of the neurotic symptoms mentioned above, perhaps even more prevalent than in the population at large (where they can hardly be said to be in short supply.) Maybe in our neurotic search for answers, we will stumble on keys to resolving our neuroses, including not only the one that drove us to such a search in the first place, but as an added bonus, some of the ones at the root of such things as substance abuse, racism, etc. (which might be enough to win the approval of even the most rigid pragmatist.)

Then we can all just go do our eight hours, come home, and sit drinking beer on the porch, laughing about the times when silly neurotic people went wasting their time on impractical, unanswerable questions. Life will be wonderful then...well, a little boring maybe, but ...easier.
 
Ok just to be an ass

"First of all. no feedback will occur in this system until a sound happens to trigger it."

This is not true. Amplifiers that are uncoupled (no input, unrestricted output) and have excessive gain will feed back spontaneously, through either the mere physical position of their constituent parts or even the interelectrode capacitance of the components of the vacuum tube. Remember, you cited Hendrix... only Marshall hotrodded 1958 plexis and 50w and 100w imports (EL-34s) were ever used by the Master. *1

There are different kinds of feedback we are discussing. One is called parasitic feedback (or oscillation); it is basically the squeal you hear when someone turns a microphone up too high. The other kind is positional, or control, or sensory feedback, whatever you want to call it. It says that this arm (of, say, a robot) is here, or that the force applied between two rollers that spit out sheet metal is this... If that quantity is low, more force is applied; if over, less... and in the next sample the test is done again.
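(That second kind can be sketched in a few lines of Python. The roller-force numbers and the simple proportional correction are illustrative assumptions, not any particular controller:)

```python
def control_loop(target, measured, gain=0.5, samples=50):
    """If the measured quantity is low, apply more; if over, less;
    and in the next sample the test is done again."""
    value = measured
    for _ in range(samples):
        error = target - value   # compare against the setpoint
        value += gain * error    # correct part of the way, then re-test
    return value

force = control_loop(target=100.0, measured=20.0)  # e.g. roller force
```

Unlike the parasitic kind, this loop is designed to converge: the correction shrinks the error each sample instead of amplifying it.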

Murcitio:" "Other internal representations"? If by this you mean "thoughts" (as separate from thinking), "memories" (as separate from remembering, "id, ego, & superego" (as separate, distinct causal entities), then yes, it is on the same ficticious level. Each of these concepts is tremendously useful in the vernacular, but is no more "non-fiction" than the sun "rising" or the stars "coming out"

Now see, this is exactly the kind of semantical meandering I was trying to avoid. To paraphrase: "flight" as separate from flying, "action" as separate from acting. This has no bearing on whether or not we can design a technology that successfully acts "intelligently". So even the discussion of that indefinable quantity in regard to AI is counterproductive (at this time).

I believe that consciousness is a thing that arises from the complex machinery of the brain. I think it is self-demonstrative and is a sort of "super-set" of the brain, similar to the Gaia Hypothesis. To state that it does not exist because we cannot define it or understand it is tantamount to declaring there can be no Grand Unified Theory for the same reasons. Kind of a fanatical skepticism, just as bad as the religious kind.

Mark Turner, one of the authors of conceptual integration theory for AI: "I work on higher-order cognitive operations that distinguish human beings from other species and apparently emerge in the record of our descent during the Upper Paleolithic." So not only does this guy proclaim the uniqueness of the human animal and its consciousness, but he seems to draw a line in the sand as to when it occurs. He and Fauconnier read dry but are very informative and at the forefront of cognitive modeling. I hadn't followed his stuff for a few years until Suggestologist, eh-hem, suggested it here =)



P.S. Sundog, saw your pic and it gave me an idea. I talked to Vaughn Bode's brother Mark about this being an excellent time to do a Cheech Wizard movie à la Pixar. He replied that he's working on a short and shopping the option on a film. That would be cool!!

*1 I work on tube amps as a sideline, restoration and hotrodding. I had a Marshall where someone had cascaded the gain stages as an overdrive. Problem was, you would turn the gain up to 10 o'clock and the damn thing would shut down... almost no output. Turns out it was feeding back at radio frequencies. Decoupled the gain stage with a cap and a resistor at the plate, and voilà.
 
TillEulenspiegel said:
Ok just to be an ass

"First of all. no feedback will occur in this system until a sound happens to trigger it."

This is not true. Amplifiers that are uncoupled (no input, unrestricted output) and have excessive gain will feed back spontaneously, through either the mere physical position of their constituent parts or even the interelectrode capacitance of the components of the vacuum tube. Remember, you cited Hendrix... only Marshall hotrodded 1958 plexis and 50w and 100w imports (EL-34s) were ever used by the Master. *1

Picky, picky, picky. :D

I was talking about an idealized situation where you have PERFECT silence in the room, a hypothetical "perfect amplifier" and NOTHING coming from the speaker. This arrangement won't feed back until triggered by SOME noise, however small.

P.S. Sundog, saw your pic and it gave me an idea. I talked to Vaughn Bode's brother Mark about this being an excellent time to do a Cheech Wizard movie à la Pixar. He replied that he's working on a short and shopping the option on a film. That would be cool!!

*1 I work on tube amps as a sideline, restoration and hotrodding. I had a Marshall where someone had cascaded the gain stages as an overdrive. Problem was, you would turn the gain up to 10 o'clock and the damn thing would shut down... almost no output. Turns out it was feeding back at radio frequencies. Decoupled the gain stage with a cap and a resistor at the plate, and voilà.

LOL, two great stories!

I'm gonna remember you when I want to hotrod an amp...
 
TillEulenspiegel said:
Now see, this is exactly the kind of semantical meandering I was trying to avoid. To paraphrase: "flight" as separate from flying, "action" as separate from acting. This has no bearing on whether or not we can design a technology that successfully acts "intelligently". So even the discussion of that indefinable quantity in regard to AI is counterproductive (at this time).
But this is exactly why we need, as AP suggested, to operationally define our target. One person's "intelligent" machine is another's abacus. I've seen machines that learn through trial and error, in a manner that I would call intelligent. Another person might look at the program, and for some reason say this was not "intelligent" learning, but (just to choose a word) imitative learning.

Can we do a lot of work that gets us to this undefined goal? Perhaps. I honestly cannot know until I know what the objections will be, and I don't know what those will be until I see the agreed-upon operational definition. And so the "semantical meandering" is really just part of the discussion of where we want to get to.
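(For what it's worth, the trial-and-error learning mentioned above can be sketched in a few lines of Python. The two-action world, the 20% exploration rate, and the names are invented for illustration -- a bandit-style toy, not anyone's actual program -- and whether to call it "intelligent" or merely "imitative" is exactly the definitional question:)

```python
import random

def learn(actions, reward, trials=500, seed=0):
    """Try actions, keep a running average payoff for each, and
    increasingly favour whichever has worked best so far."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}

    def estimate(a):
        return totals[a] / counts[a] if counts[a] else 0.0

    for _ in range(trials):
        if rng.random() < 0.2:          # explore: try something at random
            a = rng.choice(actions)
        else:                           # exploit: use the best guess so far
            a = max(actions, key=estimate)
        totals[a] += reward(a)
        counts[a] += 1
    return max(actions, key=estimate)

# A toy task where only one action ever pays off.
best = learn(["left", "right"], reward=lambda a: 1.0 if a == "right" else 0.0)
```

Nothing in the program names the winning action; it settles on it purely by trying, scoring, and repeating.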
 
Originally posted by TillEulenspiegel

Now see, this is exactly the kind of semantical meandering I was trying to avoid. To paraphrase: "flight" as separate from flying, "action" as separate from acting. This has no bearing on whether or not we can design a technology that successfully acts "intelligently". So even the discussion of that indefinable quantity in regard to AI is counterproductive (at this time).
Ok, NOW I agree. In the search for artificial something-or-other (that peculiar property which must not be named, unique to the human brain), maybe we'll learn enough to know what to call it, and how to define it. Which leaves us still without terms we can safely use, which is likely to get awkward. If we end up having to abandon the 'artificial' distinction as well, we're going to be in real semantic trouble. This does not mean that the entire field of research must be abandoned, just that we are going to have a hard time making the results fit into the conceptual categories we are presently using. (Suggestologist, you were absolutely right; what we need is a neologism.)
So not only does this guy proclaim the uniqueness of the human animal and its consciousness but seems to draw a line in the sand as to when it occurs.
Flogging on my feedback metaphor a little more, if consciousness (oops...sorry) is a type of feedback phenomenon, a system in which feedback is taking place differs dramatically from one in which it is not, and we would expect it to emerge rather suddenly once the threshold of sensitivity/power is crossed.

I feel like my mind has been expanded as a result of this discussion; I have some new reading to do, and some new thinking as well. If there is anyone within fifty miles of me that could have helped make that happen, I haven't met them yet (maybe I should get out more). Anyway, thanks all.
 
Why do we have to limit the concept of intelligence to the human model? My dog certainly has the ability to learn, remember, and function in an intelligent fashion. Is there not a progression in intelligence?

Sorry if I missed this earlier.
 
"But this is exactly why we need, as AP suggested, to operationally define our target. One person's "intelligent" machine is another's abacus. I've seen machines that learn through trial and error, in a manner that I would call intelligent. Another person might look at the program, and for some reason say this was not "intelligent" learning, but (just to choose a word) imitative learning. "


I agree, basically. There are machines that do one thing better than we will ever be able to do; I mean, it doesn't matter how smart you are, you will never be able to do math at the speed that a digital computer can (with quantum computation in the wings... waiting), and a harvester will always outproduce a field worker. At the end of the day, though, the field worker can not only step into a car he recognizes as his own, but start it and drive it home. If the car breaks down, the poor schmo has to haul his ass out of the car and fix it; meanwhile, back at the farm, the combine is an 8-ton hunk o' crap. The human mind is a black box: it can learn just about anything, sift through reams of fuzzy data, and find a damn good approximation of reality. So who's smarter? Considering that the brain does trig on the fly while you're driving around a curve or making a turn, and then you come home and make spaghetti while whistling a tune, all the time thinking about that hot chick at work... If there is a god, I sure can't out-design him.

Now look to a cat and her kittens: the seemingly cruel treatment she inflicts on a captured mouse is not cruel, but a demonstration of how to hunt. The kittens see that play and mimic it to become successful hunters. Imitative learning... seems to be a good thing.

Trial and error? My wolf hybrid wants out. She'll dig a tunnel after I got higher fences and after I made the latch on the gate foolproof, and I assume that if I make that impossible she will find another way.

Dogs and cats here folks.
Humans? Anyone have kids? That is how we learn. Studies in the past 20 or so years say that the first few years of random stimulus and behavior are actually hard-wiring the brain; that these strategies are exactly the ones needed to develop a functional brain with an awareness of the environment around it. Of course, that represents a 17-18 year learning curve.

So if I had to develop a learning program that has a broad scope of capability, it would be based on those known properties. If, however, I wanted to develop an expert system on, say, design parameters for nuclear reactor vessel construction, one could suppose it could be achieved in less than a year (most of that time would be spent on developing the system, not its ability to function).
Much of my take on human consciousness stems from a few people, Joseph Campbell being one of the foremost; not just his take on the commonalities of creation mythos, but his grasp of the eastern mind in its explanations of consciousness. The delineation being such that there's a base level of organic consciousness, where the stomach and intestines don't have to be cognizant or aware to function, or the cells to replace themselves; a higher level where we're aware of projected image (self), and injury, and grooming; and yet a higher transcendental level where we have an awareness of the flow of time and our perceived duty to our fellow man and... yes, and.
 
Dymanic said:

Still, some pretty cool stuff has been stumbled upon by folks who didn't know what they were looking for (or who were looking for something else, or were looking in the wrong place, etc. -- Christopher Columbus is the first example that comes to mind.) Let's not forget that the science of chemistry basically got its start from some guys who were looking for a way to turn lead into gold. As I said above, sometimes when starting out, you don't know enough about a problem to know what to look for, or what questions to ask. How much of real interest are we likely to find if we limit our search for answers to those areas where we are already certain that the questions we are asking are valid? That's like just filling in the blanks on a prepared form.

Much more is stumbled on by people who know where they want to go, such as the Moon. When a question is basically meaningless (devoid of semantic substance), you will never be able to answer it until you formulate an answerable question. Sure, there should be times when you're not sure -- those times give you a chance to appreciate new possibilities; but those should not overshadow the times you know where you want to go, or you'll never get anywhere.
 
Originally posted by TillEulenspiegel

But this is exactly why we need, as AP suggested, to operationally define our target. One person's "intelligent" machine is another's abacus.
Ok, I'm ready to take a tentative stab at it. I found this quote on my desktop, but can't remember where it came from (Dennett probably):
An intelligent system is one which can notice patterns in whatever data it has access to, to use those patterns to support inferences, and most particularly, use them to build more patterns.

[I would add: and to extract from those patterns their essential structure; the 'inferences' being good guesses as to what rules were used to create the patterns, and which of those rules should be considered expendable, and which indispensable, upon transporting the pattern to a different set of data (particularly a different type of data)]

So far, most of our computer programs don't notice new patterns at all. They react to patterns, sometimes complicated ones, with patterns of behavior which may also be complex. But all these patterns are given a priori, by the programmers. The programs don't discover them. They aren't intelligent, even to the limited degree that an animal might be said to be intelligent. This lack of intelligence is not because they are programs, or machines. It is because they are machines designed to act in accordance with given patterns, not to discover patterns for themselves.
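(To make the contrast concrete, here's a toy Python sketch of the 'discovering' side. The constant-step rule it looks for is the only pattern it can notice -- an assumption of the example, not a claim about real pattern discovery:)

```python
def discover_rule(seq):
    """Inspect the data and guess a rule (a constant step), if one fits."""
    diffs = {b - a for a, b in zip(seq, seq[1:])}
    return diffs.pop() if len(diffs) == 1 else None  # None: no pattern noticed

def extend(seq, n):
    """Use the discovered rule to build more of the pattern."""
    step = discover_rule(seq)
    if step is None:
        return list(seq)
    out = list(seq)
    for _ in range(n):
        out.append(out[-1] + step)
    return out

grown = extend([2, 5, 8, 11], 3)  # the +3 rule was never given a priori
```

The point of the toy: the +3 is nowhere in the program; it is read out of the data and then used to build more pattern. A program with +3 hard-coded would react to the pattern without discovering it.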
TillEulenspiegel again

there's a base level of organic consciousness where the stomach and intestines don't have to be cognizant or aware to function or the cells to replace themselves
Extract from Barbara McClintock's Nobel Prize Lecture, 1983
"A goal for the future would be to determine the extent of knowledge the cell has of itself and how it utilizes this knowledge in a 'thoughtful' manner when challenged"

What's driving me crazy about all this is the way things seem to 'flicker' between subject and object; between code and data; between patterns -- which can be recognized and used as the basis for forming new patterns -- and the results of such a process. But this flicker phenomenon is a feature of the way my thinking approaches the problem; in the systems themselves, the dual states must exist simultaneously.

The chemical state of a cell determines what activities take place in the cell, but those activities are not intentional in the sense that they are explicitly designed for the purpose of creating patterns which then act as the basis for further actions; the cell is simply attending to its needs -- but in so doing, it is simultaneously creating further 'instructions', and in attending to its needs, it must also 'take this into account'. It's like a recursive function call that never bottoms out, in a system with infinite stack space. The cell's 'self-knowledge' and its 'utilization' of that self-knowledge are indistinguishable.

Is this making any sense at all?
 
Dynamic:"An intelligent system is one which can notice patterns in whatever data is has access to, to use those patterns to support inferences, and most particularly, use them to build more patterns...."

I thought I posted it in this thread, but I guess it was another one on AI or consciousness. My own take: an expression of an intelligent system would be its ability to take known behaviors or rule sets and apply them to a novel situation, autonomously. I don't usually quote myself, but I think that's a distilled version of what's accepted. In other words: the Mars rover Sojourner pulls up to a rock, bumps into it, and can "decide" to back up, go forward, or turn in either direction. Now say (hypothetically) the thing's learning program was only exposed to obstacles that had regular surfaces (say, building-block kinds of objects): can it deal as successfully with an irregular object as with a square? Sounds deceptively simple, but not only is it applicable, it actually occurred (some guy with a joystick at Kennedy "drove" it around the rock). The rest of the quote seems a little dated, though, as there are many expert systems and autonomous agents in use.

I should have been more careful in my use of the word consciousness regarding the organic level. The allusion I was aiming for is that, as single cells, all are complete organisms: stomach cells, liver cells, skin cells can live alone in nutrient solution and perform as complete organisms; but put them together in vivo and they suddenly interconnect with other cells and cell types and become a synergistic organism, one which functions "intelligently". The energy level is low, the mouth chews, the tongue finds the biomass acceptable, the smooth muscle tissue funnels the mass down to the stomach, which produces acid and informs the liver to make enzymes and the pancreas to produce insulin... whoa, pancreas, back off, not enough blood sugar, etc., etc.
So as you view this process, you see that the mechanism is greater than its constituent parts, although these processes go on (seemingly) in the background. I guess this behavior (more on the macro level than the cellular) could be called regression, or trial and error, or successive approximation; the process and outcome remain the same: state the ideal condition, adjust for all parameters that are deviant, examine results, compare with the ideal condition, rinse and repeat =)
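(That rinse-and-repeat recipe, sketched in Python. The parameter names and numbers are invented for illustration, on the assumption that each deviant quantity is simply nudged toward its ideal every round:)

```python
def regulate(ideal, current, step=0.25, rounds=40):
    """State the ideal condition, adjust every deviant parameter part of
    the way toward it, examine the result, compare, and repeat."""
    state = dict(current)
    for _ in range(rounds):
        for name, target in ideal.items():
            deviation = target - state[name]   # compare with the ideal
            state[name] += step * deviation    # adjust, then go around again
    return state

body = regulate(
    ideal={"blood_sugar": 90.0, "energy": 100.0},
    current={"blood_sugar": 140.0, "energy": 40.0},
)
```

No single part "knows" the whole organism; each loop just corrects its own deviation, and the coordinated result emerges from the repetition.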

Dynamic:"What's driving me crazy about all this is the way things seem to 'flicker' between subject and object;..."

Aye, there's the rub. The flicker is the thing that limits us in the application of finite, concrete rule sets to describe reality; one man's green is another man's blue. All objects have subjective characteristics. That's why I let a word like transcendental and a dialog by two SF film actors creep into my conversation. I hate when people use semantics as a dodge for understanding or proof, but there are elements in our discussion that cannot be approached in any other way. If I say the word box, you envision a (your) mental representation; we have exchanged a complete concept in one word, and unless one of us is a mental defective, we could transpose the mental images we perceive as "box" and would probably agree that they match. A small 8-bit computer can do that: type the word box, it will show a box. When we get to the stage of me asking you to go into a room full of different-size and different-color boxes and pick up the small yellow one, the level of complexity increases exponentially. For you it takes seconds; for a program, depending on the processor speed and the skill of the programmer, it could take seconds to minutes... and that's a virtual eternity to a machine running at teraflops (trillions of floating-point operations per second). The fact is, the cognitive model that seems to be the most accepted method of approximating human cognition stems from Turner's research on language (he is a doctor of English), specifically the metaphor: one of the most slippery, fuzzy things to try and tackle, but ultimately based around your flicker, and hopefully a way to approximate the human outlook in machines. And if it adds any solace, your craziness is the driving force behind endeavors to understand the approximations of human thought, and not a testament to a lack of depth of understanding on your part.
 
