So, robot companions. Not something I’d thought about a whole lot before I went to Ruth Aylett’s talk on Sunday. But a tremendously interesting concept.
Various stories about robots that will interact with us come up in the media now and again – robots that can play football, the perennial variations on the Turing Test, and the telepresence robots that seemed to be all over the scientific media for a while then died off (apparently they’re still being researched, and the robot scientist introducing the talk had seen one in action at a wedding). But there are many reasons that robot companions could be desirable in the home (or in hospital etc.) – both for practical help for people with disabilities, and for companionship.
At the start of the talk, Ruth Aylett showed a variety of pictures and video of different robots to ask which ones we’d be comfortable with as a robot companion. Classically robot-shaped robots, fuzzy robots, cute robots, utilitarian robots like the Roomba… For some of the robots there was complete agreement as to whether they’d make a good companion (no-one wanted a pet drone) but on others I was interested to see the divide. When we were shown a video of ‘creepy’ robots, I actually found one of them very appealing!
The reason the ‘creepy’ robots looked ‘creepy’ is a concept called the ‘uncanny valley’ (which I’d heard of before in relation to animation – though I hadn’t heard its alternative name, the ‘zombie zone’). If we start with a totally mechanical robot and make it look more and more human-like, it becomes more and more likeable, until it gets to the nearly-but-not-quite human stage, at which point it becomes strangely repulsive, or ‘uncanny’.
So one of the ideas with robots that will be companions is that it’s not a good thing to make them look nearly-human (and it would be enormously difficult to make them look genuinely human). Instead, they could look more like something we know we relate to well, like a cartoon character, or a pet.
There’s a real reason to make a robot human- or pet-like in its ability to express emotions. We are emotional creatures, and if we can react to robots using that ‘relating’ skill rather than pure logic, we’ll likely get on with them better.
This was the ‘lightbulb moment’ of the talk for me. Ruth Aylett was very convincing about us humans not being ‘brains on legs’ – she gave a really excellent answer to a question about at what point technically-augmented humans would stop being human. She constantly emphasised that what makes us human is the fact that we are alive and have a vast range of body processes, not just conscious logical thought. We use empathy and emotion in all our interactions. So if we’re going to share our lives with a robot, we probably want it to work in a way that we’re happy to have it around.
My favourite example was a robot that doesn’t have anything resembling a face, but it does have body language. When it wants to get your attention, it does a very obvious ‘pay attention to me’ dance and then turns round to lead you to the focus of its attention. It can also ‘wag its tail’. I thought this was very smart and intuitive.
And we also had a robot in the room to say hi to! Sarah is a research robot used in the lab at Heriot-Watt. As she said to us, “while I have no arms and few sensors, I do have a great expressive head”.
She also has a voice that sounds very human – apparently we’re not as freaked out by nearly-human voices as we are by nearly-human appearance and movement. I was sure she was Irish but apparently she’s Scottish. A nice voice, either way! It sounds like the voice is constructed similarly to the project that I donated my voice to.
I liked that Sarah can jump into a TV screen or a smartphone if you want to take her with you somewhere. On a smartphone, she even has extra sensors, since she can use the accelerometers. On the TV, we could interact with her using a Kinect to select the answers to her questions (and she’d show us how she felt about the answers!).
I’m intrigued to see how well this works out over the years to come. Done well, it seems that there could be loads of helpful uses of robots that interact with humans. Done badly, we could be stuck in a world of free-standing Microsoft Paperclips. As an introvert, I hate programs and interfaces that are overly cheery, so I liked the idea that you could dial down a robot’s personality to be more melancholy!