When humans collaborate, they constantly communicate their intent through body language and speech in order to arrive at a common understanding. Similarly, we would like robots to communicate their intent so that robots and humans can quickly converge on solving the same problem and avoid miscommunication. A particularly important intent for a robot to communicate is its motion, because of the safety concerns involved when a robot actuates in close quarters with a human. Human-robot collaboration would be safer and more efficient if robots could communicate their motion intent to human partners quickly and easily.
In our latest paper, we propose a mixed-reality head-mounted display interface that visualizes a robot’s future poses over the wearer’s real-world view of the environment. This allows a human to view the robot’s entire planned motion in the real workspace before it even moves, avoiding the risks of testing a motion plan on the physical robot. To measure our interface’s ability to improve collaboration speed and accuracy, we conducted an experiment with real-world users, comparing our interface against a traditional visualization technique and against no visualization. We found that our interface increased accuracy by 16% and decreased task completion time by 62% compared to traditional visualizations.
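To give a concrete sense of what the overlay involves, here is a minimal sketch, written in plain Python rather than the actual headset rendering code, of how configurations along a planned trajectory might be sampled and faded into "ghost" poses for display. The GhostPose and sample_ghost_poses names and the fading scheme are illustrative assumptions, not part of our released system.

```python
# A minimal sketch (not the paper's actual implementation) of how planned
# robot configurations could be sampled and handed to a mixed-reality
# renderer as semi-transparent "ghost" poses. All names here are
# hypothetical placeholders, not APIs from the released code.

from dataclasses import dataclass
from typing import List


@dataclass
class GhostPose:
    """One semi-transparent 'ghost' of the robot to overlay in the headset."""
    joint_angles: List[float]   # configuration to render
    alpha: float                # transparency; later poses are drawn more faintly


def sample_ghost_poses(waypoints: List[List[float]], num_ghosts: int = 10) -> List[GhostPose]:
    """Pick evenly spaced configurations along a planned joint-space path."""
    if not waypoints:
        return []
    step = max(1, len(waypoints) // num_ghosts)
    sampled = waypoints[::step]
    ghosts = []
    for i, q in enumerate(sampled):
        # Fade later poses so the direction of motion reads at a glance.
        alpha = 1.0 - 0.7 * (i / max(1, len(sampled) - 1))
        ghosts.append(GhostPose(joint_angles=q, alpha=alpha))
    return ghosts


if __name__ == "__main__":
    # Toy 2-joint plan: the arm sweeps from one side of the table to the other.
    plan = [[0.0 + 0.05 * t, 1.0 - 0.02 * t] for t in range(40)]
    for ghost in sample_ghost_poses(plan, num_ghosts=5):
        print(f"render ghost at {ghost.joint_angles} with alpha={ghost.alpha:.2f}")
```

In the actual interface, each sampled configuration would be rendered as a translucent model of the arm registered to the real workspace, so the wearer sees the whole planned sweep at once before approving it.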
If you’d like to see a demo of the interface yourself, check out the video below! Watch as the robot visualizes different motion plans that attempt to move the arm from one side of the table to the other without hitting any of the blocks. As opposed to traditional monitor-and-keyboard interfaces, our MR headset allows users to inspect the real scene quickly and efficiently. The code for the system is available on GitHub here. For more information, see our paper, which was accepted to ISRR 2017!