Category Archives: Publication

Screwing a Nut onto a Bolt with Vision

With light field methods, we can use Baxter’s monocular camera to localize the metal 0.24″ nut and its corresponding bolt.  The localization is precise enough that the robot can use the estimated poses to perform an open-loop pick, place, and screw to put the nut on the bolt.  Precise pose estimation makes complex routines quick to encode: once the robot knows where the parts are, it can perform accurate grasp and placement actions.  You can read more about how it works in our RSS 2017 paper.
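
To illustrate how precise pose estimates make such routines easy to script, here is a minimal Python sketch (not the code from the paper): `estimate_pose`, `move_gripper_to`, `close_gripper`, `open_gripper`, and `rotate_wrist` are hypothetical helpers standing in for the robot’s real control API, and the approach offsets and rotation counts are illustrative.

```python
# Minimal sketch of an open-loop pick/place/screw routine driven by
# estimated part poses. All robot.* methods are hypothetical stand-ins.

import numpy as np

def offset(pose, dz):
    """Return a copy of a 4x4 pose matrix shifted dz meters along its z-axis."""
    shifted = pose.copy()
    shifted[:3, 3] += pose[:3, 2] * dz
    return shifted

def screw_nut_onto_bolt(robot):
    nut_pose = robot.estimate_pose("nut")    # from the light-field localizer
    bolt_pose = robot.estimate_pose("bolt")

    # Open-loop pick: approach from above, descend, grasp.
    robot.move_gripper_to(offset(nut_pose, 0.10))
    robot.move_gripper_to(nut_pose)
    robot.close_gripper()

    # Open-loop place: carry the nut to just above the bolt tip.
    robot.move_gripper_to(offset(bolt_pose, 0.10))
    robot.move_gripper_to(offset(bolt_pose, 0.02))

    # Screw: alternate wrist rotations and re-grasps until the nut is seated.
    for _ in range(6):
        robot.rotate_wrist(np.pi / 2)
        robot.open_gripper()
        robot.rotate_wrist(-np.pi / 2)
        robot.close_gripper()
    robot.open_gripper()
```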

Robotic Overlords?

Our group was featured in this New Yorker article, showcasing Rebecca Pankow and John Oberlin’s work programming Baxter to pick petals from a daisy, as well as some of my thoughts on inequality and automation.  I was thrilled with Sheelah’s work on this very important issue, focusing on the effects of automation and our changing economy.

Our Baxter robot picks petals from a flower.


Comparing Robot Grasping Teleoperation across Desktop and Virtual Reality with ROS Reality

Many tasks that are too dangerous for humans to perform, such as defusing a bomb or repairing a nuclear reactor, would be better suited to a robot. Ideally these robots would be autonomous, but robots cannot yet perform all such tasks on their own. For robots to help with these problems today, a human user must directly control them from afar, in an act called teleoperation. With this work, we set out to develop a teleoperation interface that is as intuitive and efficient as possible.

We developed a virtual reality interface that allows novice users to efficiently teleoperate a robot and view its environment in 3D. We have released an open-source ROS package, ROS Reality, which allows anyone to connect a ROS network to a Unity scene over the internet via websockets. ROS topics can be streamed into the Unity scene, and data from the Unity scene can be sent back to the ROS network as a topic. This allows a human to perceive a scene and teleoperate a robot within it to perform a complex task, such as picking up a cup, almost as simply as they would in real life. We conducted a user study comparing the speed of our interface to traditional teleoperation methods, such as keyboard and monitor, and found a 66% increase in task completion with our system.
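
To give a sense of how the topic-over-websocket idea works, here is a minimal Python sketch assuming a rosbridge-compatible websocket endpoint; ROS Reality’s own bridge may use a different wire format, and the address and command topic below are made up for illustration. It uses the `websocket-client` package.

```python
# Sketch: stream a ROS topic to the Unity side and publish a command back,
# speaking the JSON rosbridge protocol over a websocket (illustrative only).

import json
from websocket import create_connection

ws = create_connection("ws://robot.example.edu:9090")  # hypothetical address

# Ask the bridge to stream the robot's joint states toward the Unity scene.
ws.send(json.dumps({
    "op": "subscribe",
    "topic": "/robot/joint_states",
    "type": "sensor_msgs/JointState",
}))

# Send a command back from the Unity side as an ordinary ROS topic.
ws.send(json.dumps({
    "op": "advertise",
    "topic": "/ros_reality/target_pose",   # hypothetical topic name
    "type": "geometry_msgs/PoseStamped",
}))
ws.send(json.dumps({
    "op": "publish",
    "topic": "/ros_reality/target_pose",
    "msg": {"header": {"frame_id": "base"},
            "pose": {"position": {"x": 0.6, "y": 0.0, "z": 0.2},
                     "orientation": {"x": 0, "y": 0, "z": 0, "w": 1}}},
}))

print(ws.recv())  # first joint-state message forwarded by the bridge
ws.close()
```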

Below is a video of our system being used to teleoperate a Baxter robot at MIT from Brown University (41 miles away). Since our bandwidth requirements are about the same as a Skype call, we are able to establish a relatively low-latency connection that allows 12 cups to be easily stacked in a row. For more information, please check out our paper, which was accepted to ISRR 2017!

Communicating Robot Arm Motion Intent Through Mixed Reality Head-mounted Displays

When humans collaborate, they constantly communicate their intent through body language and speech in order to arrive at a common understanding. Similarly, we would like robots to be able to communicate their intent, so that robots and humans can quickly converge on solving the same problem and avoid miscommunication. A particularly important intent for a robot to communicate is motion, because of the safety concerns involved when a robot actuates in close quarters with a human. Human-robot collaboration would be safer and more efficient if robots could communicate their motion intent to human partners quickly and easily.

In our latest paper, we propose a mixed-reality head-mounted display interface that visualizes a robot’s future poses over the wearer’s real-world view of the environment. This allows a human to view the robot’s entire planned motion in the real workspace before it even moves, avoiding the risks of testing a motion plan on the physical robot. To measure our interface’s ability to improve collaboration speed and accuracy, we conducted a user study comparing our interface to traditional visualization techniques and to no visualization. We found that our interface increased accuracy by 16% and decreased task completion time by 62% compared to traditional visualizations.
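
As a rough sketch of the data flow (not our released code), the snippet below assumes MoveIt is the motion planner and listens to the `/move_group/display_planned_path` topic that MoveIt publishes, handing the planned waypoints to the headset before the real arm moves; `send_to_headset` is a hypothetical stand-in for the mixed-reality display client.

```python
# Sketch: forward MoveIt's planned trajectory to an MR headset for preview.

import rospy
from moveit_msgs.msg import DisplayTrajectory

def send_to_headset(joint_names, waypoints):
    """Hypothetical stand-in for the MR display client: here we just log."""
    rospy.loginfo("Previewing %d waypoints for joints %s",
                  len(waypoints), joint_names)

def on_plan(msg):
    for robot_traj in msg.trajectory:
        jt = robot_traj.joint_trajectory
        # Each waypoint is a (time offset, joint configuration) pair that the
        # headset can render as a translucent "ghost" arm over the real scene.
        waypoints = [(pt.time_from_start.to_sec(), list(pt.positions))
                     for pt in jt.points]
        send_to_headset(jt.joint_names, waypoints)

rospy.init_node("mr_motion_intent_preview")
rospy.Subscriber("/move_group/display_planned_path", DisplayTrajectory, on_plan)
rospy.spin()
```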

If you’d like to see a demo of the interface yourself, check out the video below! Watch as the robot visualizes different motion plans that attempt to move the arm from one side of the table to the other without hitting any of the blocks. As opposed to traditional monitor-and-keyboard interfaces, our MR headset allows users to inspect the real scene quickly and efficiently. The code for the system is available on GitHub here. For more information, see our paper, which was accepted to ISRR 2017!


Reducing Errors in Object-Fetching Interactions through Social Feedback

Humans communicating with other humans use a feedback loop that enables errors to be detected and fixed, increasing overall interaction success. We aim to enable robots to participate in this feedback loop, so that they elicit additional information from a person when they are confused and use that information to resolve ambiguity and infer the person’s needs. This technology will enable robots to interact fluidly with untrained users who communicate with them using language and gestures. People from all walks of life can benefit from robotic help with physical tasks, from fetching objects for a disabled veteran at home to assisting a captain coordinating a search-and-rescue mission.

Our latest paper defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy with which it interprets a person’s requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP) and approximately solve this POMDP in real time. Our model improves the speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model’s improvements, we conducted a real-world user study with 16 participants.  Our model is 2.17 seconds (25%) faster than the state-of-the-art baseline, while being 2.1% more accurate.
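
To make the “ask only when necessary” idea concrete, here is a toy Python sketch of the decision rule, not the paper’s POMDP solver: a belief over candidate objects is updated with each language or gesture observation, and the robot fetches when it is confident or asks a clarifying question otherwise. The observation likelihoods and the confidence threshold are illustrative assumptions.

```python
# Toy belief update plus a confidence-based question-asking rule.

def update_belief(belief, likelihoods):
    """Bayes update: weight each object by P(observation | object), normalize."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

def act(belief, confidence_threshold=0.9):
    best = max(range(len(belief)), key=lambda i: belief[i])
    if belief[best] >= confidence_threshold:
        return ("pick", best)   # confident: fetch without asking
    return ("ask", best)        # uncertain: ask a targeted clarifying question

# Example: three candidate objects, an ambiguous pointing gesture, then an answer.
belief = [1 / 3] * 3
belief = update_belief(belief, [0.5, 0.4, 0.1])    # gesture weakly favors object 0
print(act(belief))                                  # ('ask', 0)
belief = update_belief(belief, [0.9, 0.05, 0.05])   # answer strongly favors object 0
print(act(belief))                                  # ('pick', 0)
```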

You can see the system in action in this video: when the user is close to the robot, it can interpret the pointing gesture and immediately selects the correct object without asking a question.  When the user is farther away, however, the pointing gesture is more ambiguous, so the robot asks a targeted question.  After the user answers, the robot selects the correct object.  For more information, see our paper, which was accepted to ICRA 2017!