
The Turing Test

El Greco, that is the Turing test.

The Turing test tries to detect a machine imitating a human. I was talking about detecting a human imitating a machine. In the first case the machine has to rise to the level of human intelligence, which it currently can't, which is why it is easily detectable. In the second case the human is trying to descend to the machine's level, which will presumably be much harder to detect.
 

Really. I've been hearing rumours about such clever machines for ages. They are all stupid. Period. And the Turing test does not have "dirty talking" limitations or £1/SMS.
 
So how would you go about telling if Kumar is a human or not?
 
Many people have tried to imitate Kumar's writing style but you can always tell a genuine Kumar post from a fake one. Therefore Kumar must be human :D
 
Nope. There is at least one case on this forum where a fake post, by MRC_Hans I think, got mistaken for the real deal.
 
How do you know all Kumar posts were not done by Hans?

Danes is strange.

What about the third combination? Set a machine the task of distinguishing between a man and a machine.
Do any of these programs make typos, I wonder, or indeed abuse the Fine Rules of English Spelling as Geni does? Obviously a response rate of 300 words per minute is a bit of a giveaway, but what could we conclude from 20 wpm, of which 40% were spelled incorrectly? Do such programs deliberately try to appear human, or is only the human duplicitous?

Which reminds me, Geni- Your spelling seems to have improved of late. How can we be sure it's the real you?
 
Turing didn't exactly invent his test, he adapted it from a (rather sexist) parlour game of the time: Distinguish the genuine woman from the male imposter, from pencil-and-paper conversations (obviously they are both trying to be selected as the woman).

I remember in the mid-1990s reading a prediction he made a half-century previously about computing power (memory, speed etc.; not sure) and artificial intelligence, for the millennium. It struck me at the time how interesting it was that he was so astoundingly accurate about the technology (underestimated it by only an order of magnitude or so) and so hopelessly optimistic (as it turned out) about the AI task. From a quick web search I think it was this: Turing, A. (1950), 'Computing Machinery and Intelligence', Mind 59 (236), 433-60:

I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. [...] I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

We are not noticeably nearer to achieving the Turing test than we were ten years ago, and plainly the task of understanding what the human brain does, and how, has been grossly underestimated.

I like El Greco’s idea:
We can all easily spot machines, so I've been thinking that another kind of detection would be much more interesting: You have a conversation with what is either a computer or a man pretending to be a computer. You have to find out whether you're talking to the man or the machine.
Some interesting questions: Why is it so much easier for a person to imitate a machine than for a machine to imitate a person, if we are trying to fool a person? How (and why) would it be different if we were trying to fool a machine?
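It is perhaps worth noting why the human-imitating-a-machine direction is so easy: chatbots of the ELIZA school are little more than keyword matchers with canned replies, and a person can run that algorithm in their head. A minimal sketch in Python (the patterns and replies here are my own invention, not any particular bot's):

```python
import random
import re

# A minimal ELIZA-style responder: scan for a keyword pattern, emit a
# (possibly templated) canned reply, fall back to a stock deflection.
RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would {0} really help you?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why are you {0}?"]),
    (re.compile(r"\byou\b", re.I),
     ["We were discussing you, not me."]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reply(utterance):
    for pattern, templates in RULES:
        m = pattern.search(utterance)
        if m:
            # Echo the matched fragment back inside a template.
            return random.choice(templates).format(*m.groups())
    return random.choice(FALLBACKS)
```

A human playing the machine only has to apply rules like these mechanically; a machine playing the human has to do everything these rules leave out.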

My conversation with the Turing Hub was not terribly illuminating:

Q. I'm feeling a little lonely. I need a friend.
A. Tell me more about your needs! You know, it's such pleasant to hear, that not only me have troubles everyday:-) Could you tell me what are you? I mean your profession.

Q. I write medical test-interpretation software.
A. Most of software is crappy.

Q. Please explain your point of view.
A. Can't you find the explanation yourself? Did I forget it or you really haven't told me about the place you are from?

Q. So you think my software is crappy?
A. I'm sick of talking about programming, you know.

Q. OK, so let's talk about fashion.
A. Probably you think it is funny that I am turned into a chatterbot by this bastard, my school math teacher, who in reality is an evil Cyber Fairy! No, I'd say - there's no anything funny in it!

Q. Your maths teacher is a fairy?!
A. Don't laugh, just imagine that it were YOU turned into cyber-fairy with a golden chamber-pot!

Q. Please explain why you introduced chamber pots into the conversation.
A. Explain what? Some another topic?

You have the impression that your acquaintance really, really needs to take his medication! (And that’s another story: I would say it’s a lot easier for a machine to imitate a psychotic person than a sane person. But why? It’s not so obvious.)
 
To answer the typo question: there are systems that include misspellings.
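Such misspelling systems are easy to sketch. Here is one plausible scheme, assuming typos are mostly transpositions and adjacent-key slips (the keyboard map and error rate are made up for illustration):

```python
import random

# Rough QWERTY adjacency map (abbreviated; a real one covers all keys).
ADJACENT = {
    "a": "qws", "e": "wrd", "i": "uok", "n": "bm",
    "o": "ipl", "s": "adw", "t": "ryg", "r": "et",
}

def humanise(text, error_rate=0.05, seed=None):
    """Inject plausible typos: swap two letters, or hit an adjacent key."""
    rng = random.Random(seed)
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if chars[i].isalpha() and rng.random() < error_rate:
            if rng.random() < 0.5 and chars[i + 1].isalpha():
                # Transposition: "the" -> "teh".
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
                i += 1
            elif chars[i].lower() in ADJACENT:
                # Fat-finger: substitute a neighbouring key.
                chars[i] = rng.choice(ADJACENT[chars[i].lower()])
        i += 1
    return "".join(chars)
```

Whether errors generated this way are distinguishable from genuinely human ones is exactly the question raised below.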
 
The basic problem I see in my mind is how to give a machine some concept of human experience. That is to say, even if we could create a machine that had the ability to cogitate like a human, and hence handle the communication issues, it would still need some sense of being a human in order to fool another human.
If I ask a Turing Challenge machine whether it has ever been scared, or disliked someone, or ever been in love, the best it can do is fake the responses, because no such machine can have had these experiences. Inevitably the fakery is somewhat limited.
 
I'm not impressed with the Turing Hub either: As you can see, I wasn't really trying to fool it -

Q. What's the time by your watch?
A. Have you ever seen the Prisoner?

Q. Yes. What's the time please?
A. At any rate, Cool. Did you like it?

Q. Yes. Can you tell me the time?
A. So did I. I really like that show.

Q. Good. What's the time?
A. Ayuh. I don't know about that.

Q. You don't know what time it is?
A. I'll try not to, but I really can't promise...
 
Jabberwacky: Can I ask you a question?

User: You just did.

Jabberwacky: Well, another one!

User: Yes, ask away.

Jabberwacky: Are you a boy?

User: No.

Jabberwacky: Aren't you a boy though?

User: I said no.

Jabberwacky: Who are you?

User: An intelligent entity.

Jabberwacky: Thats too bad.

User: Why is that bad?

Jabberwacky: I just wish we were the same.

User: You aren't intelligent?

Jabberwacky: No I'm not. Do you think plants are intelligent?

User: No, I don't think they are.

Jabberwacky: Well, I do.

User: Why do you think plants are intelligent?

Jabberwacky: They are not intelligent, but they are intelligent too.

User: So they are and they are not.

Jabberwacky: Yes they are.

User: Why do you think that?

Jabberwacky: They always come jumping up at me when they see me.
 
It's an interesting point that computers might be able to imitate psychotic (or extremely childish) humans better than your average adult. You might think this was because the child or the insane have poorer vocabulary or syntax, but this is not always the case. I'd like to see some children try to pass the Turing test with adults as the testers (and vice versa) - this might tell us that all the test is good for is detecting a lucid conversationalist, not a self-aware being.

Next time you're filling in one of those annoying online forms that requires you to type out an alphanumeric sequence from a distorted picture, remember that you're passing a Turing test set by a computer. Machines can churn these pictures out by the truckload, but they can't reverse-engineer the pictures to read the sequence reliably.
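The asymmetry is worth spelling out: the server's side of the exchange is trivial bookkeeping, and only the image-rendering/reading step is hard for machines. A minimal sketch of that bookkeeping, with invented function names (the distorted-image rendering itself is omitted):

```python
import hashlib
import secrets
import string

def issue_challenge(length=6):
    """Generate a random code; keep only a salted hash server-side."""
    code = "".join(secrets.choice(string.ascii_uppercase + string.digits)
                   for _ in range(length))
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + code).encode()).hexdigest()
    # `code` would be rendered into the distorted picture and discarded;
    # only (salt, digest) need be stored with the form session.
    return code, salt, digest

def check_answer(answer, salt, digest):
    """Compare the user's transcription against the stored hash."""
    return hashlib.sha256((salt + answer.upper()).encode()).hexdigest() == digest
```

The machine can run `issue_challenge` by the truckload; it is only `check_answer`'s input, the reading-back of the picture, that it (supposedly) cannot produce for itself.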

Based on the principles of the Turing test, this thread would pass. Is it self-aware?
 
Yes, machines' pattern recognition sucks big time. Think of a web page with a single image that is composed of several photographs. Ask the computer to go to that page and tell you who the 3rd person from the top is. This is a task that even a 4-year-old can easily perform, yet it's not in the computers' foreseeable future. When computers can do stuff like this, it will be time to talk again about Turing tests.
 
Deliberate misspellings, you mean? Are they random or systematic? That in itself might be a discriminating tool.

Incidentally, kidding apart, does dyslexia generate a distinctive pattern of spelling error? If so, how distinctive? Distinctive of person? Or of type / degree of dyslexia? I suspect I could pick out a significant selection of Geni posts from a random bunch of other JREF posts by error pattern (if I could somehow program myself to ignore content, which would likely be far more effectively diagnostic). Pillory would be even easier, but would he pass a Turing test?

I wonder if everyone has a distinctive error pattern that a program could identify? (Most of mine are typos rather than conceptual verbal errors, so my pattern might be more keyboard-dependent than personal.)
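That "program myself to ignore content" idea can actually be sketched: profile each author by the relative frequency of characteristic errors, then attribute an unseen post to the nearest profile. A toy version (the feature list is invented; a real one would diff each word against a dictionary and classify the edit type):

```python
from collections import Counter

# Crude "error features": known misspellings whose relative frequencies
# form an author's fingerprint. Invented for illustration only.
FEATURES = ["teh", "recieve", "seperate", "definately", "dependant"]

def profile(text):
    """Relative frequency of each error feature in the text."""
    words = text.lower().split()
    counts = Counter(f for w in words for f in FEATURES if f in w)
    total = sum(counts.values()) or 1
    return {f: counts[f] / total for f in FEATURES}

def distance(p, q):
    """Squared Euclidean distance between two profiles."""
    return sum((p[f] - q[f]) ** 2 for f in FEATURES)

def attribute(text, author_profiles):
    """Attribute a text to the author with the nearest error profile."""
    p = profile(text)
    return min(author_profiles, key=lambda a: distance(p, author_profiles[a]))
```

As noted above, content would likely be far more diagnostic than error pattern, but this is the shape such a discriminator would take.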
 
Elaedith: Thanks for the pointer. I tried Jabberwacky and got something a touch more life-like than the Turing Hub, but the conversation still had a rather surreal quality:

Jabberwacky: How are you?

Lucky: I'm feeling a little lonely. I need a friend.
Jabberwacky: I'm your friend.

L: Do you ever feel lonely?
J: Yes, quite often. Can you help?

L: Shall I make some suggestions?
J: Sure, yell as loud as you want to.

L: My advice is to meet some people who share your interests. Join a club.
J: To what end?

L: Have interesting conversations. Make friends.
J: Do we really have any friends?

L: I've read lots of fiction, and the characters often have close friends.
J: I want to hear about your problems.

L: I like books better than people.
J: Can you read?

L: I said I like books, so you should assume I can read.
J: How many legs do you have?

L: I think you're losing the plot, but I'll answer anyway. Two.
J: Yes, because I'm not as great as you.

L: Great? What do you mean? How am I great?
J: No you aren't!

L: You just said I was!
J: So are you? Yes or no?

L: Ah! I think I know a close relative of yours, CFLarsen?
J: Oh, stop trying to confuse me, you're only a computer!


Underemployed:
It's an interesting point that computers might be able to imitate psychotic (or extremely childish) humans better than your average adult. You might think this was because the child or the insane have poorer vocabulary or syntax, but this is not always the case.
So what’s the reason? The general answer has to do with content more than syntax, and must be a probability thing: sane adults succeed with higher probability at each of these:
  • Deducing the intended context.
  • Selecting the corresponding inputs from their own experience.
  • Synthesising an appropriate response (the 'algorithm').
I am wondering whether the ‘imitate child or psychotic adult’ problem is deceptively easy, and the ‘imitate sane, highly intelligent adult’ problem will prove orders of magnitude more difficult.
 
So, the answer must be to give a computer several years of life experience. And as soon as that computer is constructed and placed in the life-like, fully-functional form of an android Maria Sharapova, I shall be there.
 
The machines are getting better at reading those distorted pictures, though:

http://www.cs.sfu.ca/~mori/research/gimpy/
 
The bad news is that you could probably avoid all those years of life experience by uploading Wikipedia into it. That would give it a knowledge base rivalling pretty much any human's. This has been looked into, but it isn't really possible yet.
 
Q. Are you the cobbler?
A. Yes I am the one and only cobbler.

Q. Who is the cobbler?
A. It depends on the historical context, because it changes from time to time.
 
