Lorek the robot represents a big step in robotics because it can understand both human language and the gestures we make in conversation. According to Wired, researchers from Brown University pulled off this feat of understanding by programming uncertainty into the robot. Here’s why this is a big deal.
Lorek, the Uncertain Robot
What does it mean for a robot to be uncertain? In Lorek’s case, it can ask a human questions about an object that the human wants:
It not only recognizes an object a human being is pointing at and talking about, but also asks questions to clarify what they mean. Lorek is limited to a handful of specific objects, sure, but the robot is a big deal for the budding field of human-robot interaction.
The experiment involves a human and Lorek, a repurposed Baxter industrial robot, sitting at a table with six objects on it, such as bowls and spoons. The human, wearing a special headset with a microphone, points to an object and asks, “Can I have that bowl?” Lorek, equipped with a Microsoft Kinect, uses object recognition and motion tracking to figure out which object the human means.
Lorek hovers its hand over the object and asks, “This one?” If it’s the correct object, the human says yes. If the human says no, Lorek knows it’s the other bowl the human wants.
But sometimes two bowls sit next to each other. Lorek hovers over the one it thinks the human wants and asks to make sure it picked the right one. In another variation, the human tries to trick the robot, asking “Can I have that bowl?” while pointing to a spoon. Lorek figures it out and selects the right bowl.
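The researchers haven’t published the details of how Lorek weighs speech against gesture, but the trick trial above hints at the idea: combine the two signals and let the stronger evidence win. Here’s a minimal sketch of that intuition. Every score in it is invented for illustration; this is not the Brown team’s actual model.

```python
# Hypothetical sketch: fuse a spoken word with a pointing gesture to
# rank candidate objects. All score values are made up for illustration.

def score_objects(objects, spoken_word, pointing_target):
    """Combine language and gesture evidence into one score per object."""
    scores = {}
    for name in objects:
        language_match = 1.0 if spoken_word in name else 0.1
        gesture_match = 1.0 if name == pointing_target else 0.2
        scores[name] = language_match * gesture_match
    # Normalize so the scores behave like probabilities.
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

objects = ["red bowl", "blue bowl", "metal spoon"]
# The human says "bowl" but points at the spoon, as in the trick trial.
ranked = score_objects(objects, "bowl", "metal spoon")
best = max(ranked, key=ranked.get)  # a bowl wins despite the misleading point
```

Notice that in this toy version the two bowls end up tied, which is exactly the situation where asking “This one?” earns its keep.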
One of the co-creators of Lorek, Stefanie Tellex, explained:
The real innovation in what we’re doing is what we call social feedback…So what we’re trying to do is not just listen and watch and then act, but assess the robot’s own certainty about what the person wants it to do.
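That idea of assessing certainty before acting can be sketched in a few lines. The threshold and belief numbers below are invented, and the yes/no update is a simplification of whatever the researchers actually do, but it captures the loop Tellex describes: act when confident, ask when not.

```python
# Hypothetical illustration of "social feedback": the robot acts only
# when it is confident, otherwise it asks a yes/no question and updates
# its beliefs. The 0.8 threshold and all probabilities are invented.

CONFIDENCE_THRESHOLD = 0.8

def next_action(beliefs):
    """Return ('pick', obj) if confident enough, else ('ask', obj)."""
    best = max(beliefs, key=beliefs.get)
    if beliefs[best] >= CONFIDENCE_THRESHOLD:
        return ("pick", best)
    return ("ask", best)

def update_on_answer(beliefs, asked_about, answer_is_yes):
    """Collapse belief on 'yes'; redistribute it on 'no'."""
    if answer_is_yes:
        return {obj: (1.0 if obj == asked_about else 0.0) for obj in beliefs}
    remaining = {obj: p for obj, p in beliefs.items() if obj != asked_about}
    total = sum(remaining.values())
    return {obj: remaining.get(obj, 0.0) / total for obj in beliefs}

# Two bowls are nearly tied, so the robot asks before acting.
beliefs = {"red bowl": 0.45, "blue bowl": 0.45, "metal spoon": 0.10}
action, target = next_action(beliefs)          # not confident: ask first
if action == "ask":
    beliefs = update_on_answer(beliefs, target, answer_is_yes=False)
    action, target = next_action(beliefs)      # the "no" leaves one clear winner
```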
Why It Matters
The technology behind Lorek lets robots understand humans better, a problem Apple, Amazon, Google, and others are trying to solve. I discussed this last week in an article about natural language processing. As I noted there, natural language processing is the grand goal for artificial intelligence. Having an AI understand what you’re saying would greatly enhance products like Siri.
I wasn’t able to find any information on the technology behind Lorek, but it sounds like it could involve fuzzy logic, a system of computer logic that tries to improve a computer’s understanding of natural language. Since human language is more nuanced than the 1s and 0s of a computer’s binary system, fuzzy logic works around the mismatch. While classical logic only allows conclusions that are either true or false, fuzzy logic lets computers act more like a human brain, making quick judgments based on inexact or partial knowledge. Basically, fuzzy logic lets computers use generalizations and guesses.
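To make the contrast concrete, here’s a toy fuzzy-logic example. Instead of a hard true/false answer to “is this object near the pointing gesture?”, each object gets a degree of truth between 0 and 1. The membership function and distances are invented for illustration.

```python
# Toy fuzzy membership function: full truth (1.0) at zero distance,
# fading linearly to no truth (0.0) at max_cm. Values are illustrative.

def nearness(distance_cm, max_cm=50.0):
    """Degree to which an object counts as 'near', from 0.0 to 1.0."""
    return max(0.0, 1.0 - distance_cm / max_cm)

# Classical logic would label each object simply "near" or "not near";
# fuzzy logic keeps the nuance that one bowl is nearer than the other.
distances = {"red bowl": 10.0, "blue bowl": 25.0, "metal spoon": 60.0}
near_degrees = {obj: nearness(d) for obj, d in distances.items()}
# red bowl -> 0.8, blue bowl -> 0.5, metal spoon -> 0.0
```

Those in-between degrees are what let a system hedge instead of committing to a binary verdict it might get wrong.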
If Apple can incorporate this kind of learning into Siri, you could start having better conversations with her. Siri could ask YOU questions to better understand what you mean. Here’s another thought. Many of the rumors surrounding the iPhone 8 involve augmented reality. What if you could point your iPhone’s camera at a pair of shoes, and ask Siri where you can buy a pair?
Like Lorek in the experiment, Siri could ask you questions about the shoes: what size you wear, what color you want, and what your price range is. Siri could then search the web with those answers. The technology behind Lorek isn’t mature yet, but if the researchers publish their work, companies like Apple could use it to improve their own products.