A social robot that can hold a conversation will stand out anywhere, and Furhat does. It doesn't have a body, it doesn't have "muscles," and it once wore a fur hat to hide the many cables coming out of the prototype's head, but Furhat leaves an impression. Preben Wik, also known as Ben, is a co-founder of Furhat Robotics and one of the members of the team that created Furhat, one of the world's most advanced social robots. A long-time engineer, he holds a Ph.D. focused on speech communication, which goes a long way toward explaining how Furhat became so skilled at chatting with us all.
During Bucharest Tech Week, which took place in May, Preben Wik took the time to sit down with TechNadu to discuss AI, the challenges of developing a social robot, its many uses, and much more.
TechNadu: Many prominent voices in the tech industry, including Elon Musk, have expressed fears that the singularity will come and worried about what that will mean for humanity. Do you think this will ever happen? Is it something that could take place in a very distant future?
Preben: If we refer to Elon Musk, I think he has also made the point that it's not the physical robots that we need to be afraid of. That is something that Hollywood has made up. The dangers of A.I. might be embodied in a physical presence, but it's not the physical presence that makes it scary in any way. You could have an A.I. that is just in the cloud or something, and that could cause damage.
I think we have some extremely smart people that are concerned, and I think we have some extremely smart people that are not concerned. And not only smart but well-informed. The world is kind of divided into these two groups. You have someone like the CEO of Google, Larry Page, who says, "Eh, it's no problem." We have Ray Kurzweil, the scientific chief of Google, who has written books where he's like, "Everything is going to be great." And when you're working in the field, it's convenient to jump into that group, because you can justify what you're doing. For a while, I used to quote a guy, Andrew Ng, who said: "I'm not concerned about the dangers of A.I. for the same reason that I'm not concerned about overpopulation on Mars." Maybe sometime in the future it will happen, but it's so far in the future that we should worry about other things. And I think there's a point to that.
Certainly, when you see where we are today and understand how primitive things are, it's convenient to take that stance. But at the same time, I think there is a reason to listen to these other people. There's an analogy that I like a lot; I heard it from a Swedish guy who is a professor at M.I.T., Max Tegmark. He says, "A.I. is like rocket fuel." It's something that we're excited about, and it will enable us to go very far. But a rocket needs more than fuel: it needs a steering mechanism, and you need a destination. So one thing we should talk about is where we want to go with A.I. Because it is very, very powerful, and if we don't know how to steer it and we don't have a clear vision of where we want to go, it might end up being really bad. It could be the best thing that ever happened to humanity or the worst thing that ever happened to humanity, and it's up to us. So we need to have that conversation, and some say this may be the most important conversation humanity has ever had.
TechNadu: And do you think this is where governments should step in and bring some kind of regulation?
Preben: Yeah, though I think they don't know enough. Back to Max Tegmark: he started an organization called the Future of Life Institute. Elon Musk gave the organization $10 million as a gift to get things going, because he likes what they're trying to do. They held conferences where they invited all the famous A.I. researchers to sign some agreements: "Can we all agree on these 23 or 24 points or something?" Some of them are about privacy, others about security, and so on. And some of the agreements are, for example, "Can we agree to ban A.I. weapons?" We have made international agreements banning biological weapons and chemical weapons, and we should discuss A.I. weapons in the same way.
You know someone can cheat, but it's still like this: if you have a very clever Ph.D. student in chemistry and you approach this person and say, "Hey, do you want to work for me? I'm going to make some chemical weapons," he will likely say, "No, that's not cool." But if you do the same thing with A.I. today, the offer will come from someone like the American Department of Defense or something, and then maybe he will be honored and consider it a great deal. So these types of questions, yes, they need political [intervention]; we need to agree on what is OK and what is not OK. What should you use this "rocket fuel" for?
TechNadu: I think Google had a project where they were helping the Defense Department by using machine learning on some maps, and they backed out after a lot of backlash.
Preben: I'm not sure if we're talking about the same thing, but I think it is the same one, where the employees of Google protested. The employees said: "No, we don't want to work on this." And that's excellent. But then, if you have another powerful company where the employees don't push back, that company can do a lot of damage.
TechNadu: That's a show of ethics from the employees, but a bit of a concern for Google. And then, if Google doesn't do it, someone else will.
Preben: So there is a reason to be concerned, I think. Having said that, look back 100 years at how oil was used, which was the cradle of the whole industrial revolution, and imagine some visionary at that time saying, "You could use it for bad things too; we should ban it, we should make it illegal to use oil." Actually, come to think of it, the whole climate change problem is a consequence of exactly that, so maybe it would have been clever of them to say "let's not." But you know, we cannot stop researchers from doing research by saying "let's ban A.I." Even if someone did, it wouldn't work. The curiosity of human beings is not going to stop. So what we need to do is discuss: where do we want to go? What do we want to do? As humanity, where do we want to go?
I think the whole issue of digitization and automation taking all the jobs away is similar: yes, it might happen, and it might be a very good thing. If we have an economic system and a distribution policy that enable all of humanity to benefit, then it can be a very, very good thing. But if only the rich and powerful benefit from everything, then it can be very bad.
TechNadu: OK, so let's talk a bit about Furhat, because it's a pretty cool device. What makes Furhat different from other similar robots and AI-powered humanoids? Let's do a bit of "surgery" on it.
Preben Wik: Yeah. One of the unique features of Furhat is that it's not mechatronic. It is a back-projected system, which means we get the power of the computer graphics you have in games and movies onto a physical robot, without the limitations you get with a mechatronic robot. Take the muscles in the face, for example: we have some 50 muscles in the face that all have to work in synchrony with each other to produce smooth lip synchronization and other emotional gestures, and no one is able to do that well mechanically. It looks a bit creepy on mechatronic robots, and it's noisy, it's very expensive, it breaks down, et cetera, whereas for us this is a software update, and it's quiet and less expensive. It also gives us the flexibility to change the texture so that Furhat can look like a man or a woman, or Albert Einstein, or an avatar, or Obama, or whatever. If you did the same thing with a mechatronic solution, you'd pay a million dollars and it would look like one thing; if you wanted to change it, you'd have to pay another million.
TechNadu: You discussed this briefly earlier during the conference. There is a bit of a conflict: do we want robots to look like robots, so we know they are robots, or do we want them to look more like humans, which makes it easier to interact with them but is also more deceiving in a sense?
Preben: I think we should look at this as an interface. There's a front end and there's a back end, and the back end can be anything: it could be an IBM Watson or a Google Home, or it could be something that we have built. But as a front end, as an interface, we should do whatever works best for the interaction. I think an Alexa or a Google Home is limited in that sense: it doesn't smile; it can blink to show that it's paying attention, but you have to learn how to interact with it, and it has some clear limitations.
So we're building on an evolution of what we have done for 50,000 years. If it looks like a robot, that's good for sending a clear message that it is not a human being, but it should also send clear interface messages: you can interact with it this way. By having human features that we know how to interact with, it opens things up; it allows people to interact the way they expect. If it works, that is. The vision is clearly that it will work. If we could interact with technology the way we interact with each other, we wouldn't need an instruction book; we would already know how to interact with it. That would be the benefit.
TechNadu: What made you want to build a project like Furhat? Where did that desire come from, and what were your goals? What uses do you see for this technology?
Preben: I see those as two very different questions. I think the basic curiosity and the desire to understand "who am I" makes anyone doing research in this field want to understand themselves. Similarly, people who study psychology do it because they want to understand themselves, and then they use it to help others. When it comes to usage, it's back to the fact that a social robot is good at things where the social aspect becomes important. There are so many things where you can just push a button or flick a switch. It's better to flick the switch than to say "turn on the light." So maybe a speech interface to a light switch is not necessary; maybe it's even a worse interface. I don't know, maybe it's good to have both, but it doesn't add a lot. But where you want a rich interaction, the richness comes from all the dimensions we have in human communication, and speech is only one of them.
TechNadu: So what purpose do you see for your robots?
Preben: There are some obvious verticals that we have identified, but just as Apple didn't know which would be the killer apps in its App Store, I hope, and think, the same will happen for Furhat and social robotics: we have not figured out all the "killer apps" yet.
But there's education; information providers like a hotel receptionist or someone at an airport or a train station; entertainment; and health services. In hospitals, we have someone doing a kind of medical screening with a Furhat, but you could also see it at the intersection of health and education, for example with autistic children. And maybe one of my favorites is social simulation, which is a kind of education. Say you want to train a psychologist: you make a virtual human with a problem, you can program the problems, and you train. Anyone who wants to practice a social situation can do it. It can be very useful and a good thing.
TechNadu: You talked about the Furhat OS during your presentation. What exactly is it, and how do you plan to expand it, implement it, and share it? Will it be another Windows-like OS?
Preben: In a way, although it sits on top of Windows, so it's an abstraction layer where we're using a regular programming language; right now we're using Kotlin as the base programming language. Like I mentioned before, if you want to build something for the phone, you have buttons and fonts and colors and things like that. When you want to build something for human-robot interaction, you need other building blocks. So we created a kind of programming language for human interaction, and because the robot has humanoid features, it should be able to do things like smile. That would be "furhat.smile," and all the underlying detail of what that actually means is part of the operating system. The developer can say "furhat.smile" and the rest is magic.
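The abstraction-layer idea Preben describes can be sketched in Kotlin, the language he mentions. This is a hypothetical illustration, not the real Furhat SDK: the class and method names (GestureEngine, Furhat, smile) and the animation parameters are invented for the example. The point is the split he outlines: the developer calls one high-level gesture, and the "operating system" layer expands it into coordinated low-level movements.

```kotlin
// Low-level layer: the "OS" knows which animation parameters a gesture
// needs. In a real system this would drive the back-projected face model;
// here it just records what it was asked to do.
class GestureEngine {
    val log = mutableListOf<String>()
    fun animate(params: Map<String, Double>) {
        params.forEach { (region, weight) -> log.add("$region=$weight") }
    }
}

// High-level layer: what the application developer sees.
class Furhat(private val engine: GestureEngine) {
    fun smile() {
        // "furhat.smile" -- the rest is magic: one call expands into
        // coordinated movements of several facial regions (values invented).
        engine.animate(mapOf("mouthCorners" to 0.8, "cheeks" to 0.5, "eyes" to 0.2))
    }
}

fun main() {
    val engine = GestureEngine()
    val robot = Furhat(engine)
    robot.smile()
    println(engine.log)  // prints [mouthCorners=0.8, cheeks=0.5, eyes=0.2]
}
```

The design choice mirrors ordinary UI toolkits: just as a phone developer says "button" without drawing pixels, the robot developer says "smile" without animating muscles.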
TechNadu: Seeing Furhat in action, how social it is, and watching the presentation this morning at the conference, I kept thinking about the movie "Her." Are we somewhat at risk of reaching that level of connection with a social robot?
Preben: You mean the falling in love part?
TechNadu: Well, friendship, maybe.
Preben: If we get to that point of capability... Because that was the unrealistic part of "Her": that she was so good at conversation and everything. This is very, very difficult; this is what we're struggling with, and we're not able to do it yet. But if we could have a friend, a digital friend, I don't see anything bad with that, except that some people might stop seeking analog friends, because a digital friend can be easier to talk to.
And that is kind of what is happening today, even without robots. I have a son who is 16, and he spends his time playing computer games; he's online, and he has lots of friends somewhere out there. The whole "in real life" part is becoming shorter and shorter and smaller and smaller, so it's not really because of robots as much as it's an issue with the digitization of the world.
What do you think about what Ben said? Let us know by dropping a comment in the section below the article. Share the interview with friends online so others can discover Furhat too. Follow TechNadu on Facebook and Twitter for more interviews, tech news, guides, and reviews.