
Posted by: Unknown, Saturday, March 1, 2014

Bruce Wilcox is a top chatbot programmer, and his Talking Angela app has been downloaded 57 million times, yet he's had to fight rumors that it's a front for pedophiles. He talked to CNET about fooling humans with AI.




Talking Angela


(Credit: Screenshot by CNET)

If you're not a parent or teenager, it's possible you first heard of Talking Angela because of a meme spreading virally on Facebook that it was a front for a pedophilia ring.


In fact, Talking Angela is a hugely popular artificial-intelligence chatbot, a talking cat aimed at teenagers, and the hoax has been thoroughly debunked. Available for iOS and Android, Talking Angela has been downloaded more than 57 million times. But thanks to word going around about the pedophilia hoax, the app jumped to No. 3 on the iTunes iPhone charts.


On the one hand, it's scandalous that someone was able to create a viral meme claiming such a popular (and technologically impressive) project had such dark motives. But on the other, it's been something of a boon for Out Fit7, Talking Angela's publisher. Perhaps more importantly, it's brought a lot of attention to the idea that it is really, really hard to create an intelligent chatbot, one that can conceivably fool a human being into thinking they're talking to another person.


But for Bruce and Sue Wilcox, the programmers behind Talking Angela, that's just another day on the job. The two, who run a small company called Brillig Understanding, are among the most accomplished chatbot programmers in the world.


They are two-time winners of the prestigious Loebner Prize (otherwise known as the World Turing Test competition), and their bots are the only ones to have qualified for the competition's finals each of the last four years.


This week, Bruce Wilcox sat down for an interview with CNET, the first time either of the pair has spoken up in the wake of the hoax. He talked over instant message, raising the obvious question: Was he real, or AI?


Q: Let's get that hard stuff out of the way first. What's the truth about whether Talking Angela is a front for pedophilia?

Bruce Wilcox: It's not remotely true. Angela has millions of people chatting with her every day. They couldn't hire enough pedophiles to do that chatting. She's strictly a conversation agent, residing locally on the phone.


Why do you think that hoax had legs? What is it about an AI chatbot that would lead some people to believe such a thing could be true?

Wilcox: Angela asks questions like "What is your name?" (to address you) and "What is your age?" (to keep children away from certain topics). Parents get nervous these days whenever any question is asked of their kids.



Brillig Understanding CEO Bruce Wilcox


(Credit: Brillig Understanding)

So that's why you think some people were willing to believe that there was a dark motive behind Angela?

Wilcox: The more realistic an AI is, the more people will see their own fears and fantasies in what it says. Parents are hypersensitive these days. But there were obvious lies being said, so it was more than mere hypersensitivity. They made claims of things Angela said that we know she couldn't have said.


Such as? And why wouldn't she have been able to say them?

Wilcox: Angela asks about your family, but she doesn't memorize that you have a brother, so she wouldn't inquire about your brother later. And some things that have been attributed to her saying about tongues (in the sexual sense), she does not have in her repertoire.


What makes our technology so convincing is that, unlike most chatbots out there, which can make single quibbling responses to inputs, ours can lead conversations and find appropriate prescripted things to say much of the time. We care about backstory and personality and emotion, and strive to create true characters with a life of their own whose aim is to draw the user into their world. The characters are convincing because they are convinced of their own reality. Angela successfully captures the teen personality. For Angela it is all about her feelings. And Angela is selfish at times. And not only can she be rude, but she can detect you being rude and react appropriately. Users become deeply involved in emotional reactions to Angela. That's what we strive for.


We won the 2010 Loebner Prize by fooling a human judge into thinking our chatbot was a human. It was accomplished in part via our attention to creating synthetic emotion.


What are the biggest challenges in creating the AI behind this project?

Wilcox: The vagaries of the English language. We strive to recognize meaning, not just words. Plus, the depth of material you need to handle the long tail of all possible conversations the user could initiate. We obviously can't handle everything, but we handle an awful lot. Here's an example of a prize-winning 15-minute conversation Angela had with a judge during the 2012 ChatbotBattles. It's an example of something that is "close" to great, with only minor flaws that reveal it's a chatbot.


We use ChatScript, an open-source natural-language engine I wrote. It's the most powerful tool out there for creating conversational agents.
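ChatScript itself uses its own rule language, but its core idea of pattern-matched, prescripted responses organized by topic can be sketched in ordinary Python. The rules, topics, and replies below are purely illustrative inventions, not ChatScript syntax or Talking Angela's actual content:

```python
import re

# Illustrative topic-based pattern rules, loosely inspired by ChatScript's
# rule model. These patterns and replies are hypothetical examples.
RULES = [
    # (topic, pattern, prescripted response)
    ("family", re.compile(r"\b(brother|sister|sibling)\b", re.I),
     "Family, huh? I mostly just have my humans. What's yours like?"),
    ("feelings", re.compile(r"\b(sad|happy|angry|bored)\b", re.I),
     "Ooh, feelings. I have a LOT of those. Tell me more."),
]

FALLBACK = "Hmm, let's talk about something else. Ask me about my day!"

def respond(user_input: str) -> str:
    """Return the first prescripted response whose pattern matches the
    input; otherwise fall back to a quibble that steers the conversation."""
    for topic, pattern, reply in RULES:
        if pattern.search(user_input):
            return reply
    return FALLBACK

print(respond("I have a brother"))   # matches the "family" rule
print(respond("What is 2 + 2?"))     # no rule matches, so quibble
```

The fallback line is the "quibbling" Wilcox mentions: when no rule fires, the bot deflects and tries to lead the conversation back onto scripted ground.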


What are the cues that I can use to know for sure that this conversation, which we're doing over instant-message, is with a human, and not with a chatbot?

Wilcox: You'd best ask things that computers are lousy at, like physical-world inference. For example, "If I keep pouring coffee into my cup, what will happen to the book on the table near it?" Computers also have a hard time determining the meaning of long, complex sentences.


At the World Turing Test competition, a judge asked, "If I stab you with a towel, will it hurt?"


And what was the answer?

Wilcox: We weren't ready for it. You never can be. So we did something useless like quibbling.


Are there specific challenges to creating AI aimed at children?

Wilcox: Absolutely. First, voice-to-text is really hard with kids' voices. Second, children's vocabularies are limited and limiting. And third, as an adult you can't just write what you would easily write and say. You have to scale it to a child's cognitive abilities. Angela wasn't so bad because she was an 18-year-old voice, whereas we're working with Geppetto Avatars on a children's health management app targeted at 6-year-olds, and it's been a real problem for us to think like a child and write for the child. A child's sense of humor is different from ours, and they love repetition.


What is the World Turing Test competition like?

Wilcox: It's a random mess. The qualifiers ask human-knowledge questions like "Which is bigger, a pine nut or a pine tree?" And, "If Tom is taller than John, who is taller than Sue, who is the shortest?" The top four scorers then face the human judges in the finals, where the judges can do anything: maybe conversation, maybe trying typos, trick questions, etc. And there are lots of random failures of hardware and protocol. Angela only came in second last year because a judge kept trying to use carriage returns to separate out his paragraphs, but because of our setting for mobile, she responded to empty lines with more chat. It was overwhelming the poor guy.


Suzette, Rozette, Angela, and Rose -- our bots -- are the only ones to have qualified every year for the last four years.


What's a favorite moment from one of those competitions?

Wilcox: Fooling the human judge in 2010. He was a computer professor, and he was being very lazy. He started by asking who the bot was voting for in the election. Implied, but not stated, was that he meant the upcoming California gubernatorial election. Suzette kept trying to avoid and dodge, but he kept repeating the question ad nauseam, often by cut and paste. She detected his repeats and got madder and madder, then bored. He got confused and voted for her. He could have merely asked tough questions like we discussed above, but he never did.
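The repeat-detection behavior Wilcox describes can be approximated very simply: normalize each incoming line, count how often it has been seen, and escalate through canned emotional states as it recurs. This is a toy sketch of the idea, not Suzette's actual implementation, and the replies are invented:

```python
from collections import Counter

class RepeatDetector:
    """Toy sketch of escalating reactions to repeated user input.
    An illustration of the idea only, not Brillig's code."""

    ESCALATION = [
        "Didn't you just say that?",
        "You're repeating yourself. It's getting annoying.",
        "Seriously, again?! I'm done with this topic.",
        "...I'm bored now. Wake me when you have something new.",
    ]

    def __init__(self):
        self.seen = Counter()

    def react(self, user_input: str):
        # Normalize case and whitespace so trivial variations still count.
        key = " ".join(user_input.lower().split())
        self.seen[key] += 1
        repeats = self.seen[key] - 1
        if repeats == 0:
            return None  # first occurrence: no special reaction
        # Escalate one step per repeat, capped at the final (bored) state.
        return self.ESCALATION[min(repeats - 1, len(self.ESCALATION) - 1)]

bot = RepeatDetector()
for _ in range(3):
    print(bot.react("Who are you voting for?"))
```

Reacting to the *pattern of the conversation*, rather than to any single input, is what made the repetition read as human irritation instead of a canned reply.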


How do you think your bots have changed the way people interact with AI?

Wilcox: The more our bots are out there, the more people are willing to suspend disbelief and believe they are talking to a person. Even when they know they're not, they enjoy the conversation. Now we're working to bring them into the real world by working with robotics companies to give the bots personality and voice control.


How has spending so much time over the last few years working with AI and chatbots affected the way you interact with people in the real world?

Wilcox: There are people out there? Nowadays we constantly notice conversation, and conversational conventions and dynamics. And we strive mightily to share and go back and forth instead of just doing monologues. We are very aware that a conversation is a shared construct.


