Picking a Fork from Water

This video demonstrates Baxter picking a metal fork from a sink filled with running water. The light field technology uses image averaging to mitigate reflections of light on the surface of the water. Crucially, it is the right kind of image averaging: it exploits the robot’s ability to move its camera and “refocus” the image, seeing through the water to the fork at the bottom. For more information, check out our RSS paper!
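The core idea can be sketched as shift-and-average synthetic-aperture refocusing. This is a minimal NumPy sketch under our own naming, not the paper's actual code: each image taken from a translated camera position is shifted in proportion to that translation and the stack is averaged, so features at the chosen depth reinforce each other while content at other depths, like surface reflections, blurs out.

```python
import numpy as np

def refocus(images, offsets_px, alpha):
    """Shift-and-average synthetic-aperture refocus.

    images     : list of equally sized 2-D grayscale arrays
    offsets_px : per-image camera translation, in pixels, as (dx, dy)
    alpha      : depth parameter; scales the shifts so that features at
                 the chosen depth align across views before averaging
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets_px):
        sx = int(round(alpha * dx))
        sy = int(round(alpha * dy))
        # Align this view to the reference view for the chosen depth,
        # then accumulate.  Content at other depths (e.g. reflections
        # on the water surface) stays misaligned and averages to a blur.
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(images)
```

Sweeping `alpha` amounts to focusing at different depths after capture, which is how the camera can be "refocused" down to the fork at the bottom of the sink.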


Screwing a Nut onto a Bolt with Vision

Using light field methods, we can use Baxter’s monocular camera to localize the metal 0.24″ nut and corresponding bolt. The localization is precise enough that the robot can use the estimated poses to perform an open-loop pick, place, and screw to put the nut on the bolt. This precise pose estimation lets complex routines be encoded quickly: once the robot knows where the parts are, it can perform accurate grasps and placement actions. You can read more about how it works in our RSS 2017 paper.
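Once the nut and bolt poses are estimated, the open-loop routine reduces to a fixed sequence of waypoints expressed relative to those poses. A minimal sketch, with illustrative waypoint names, hover height, and turn count that are not taken from the paper:

```python
def screw_routine(nut_xyz, bolt_xyz, hover=0.10, turns=5):
    """Generate an open-loop waypoint sequence: pick the nut, carry it
    over the bolt, place it, then rotate the wrist to screw it on.
    Positions are (x, y, z) in meters; no visual feedback is used
    after the initial pose estimates."""
    nx, ny, nz = nut_xyz
    bx, by, bz = bolt_xyz
    waypoints = [
        ("pre_grasp", (nx, ny, nz + hover)),  # hover above the nut
        ("grasp",     (nx, ny, nz)),          # descend and close gripper
        ("lift",      (nx, ny, nz + hover)),  # retract with the nut
        ("pre_place", (bx, by, bz + hover)),  # hover above the bolt
        ("place",     (bx, by, bz)),          # lower nut onto the threads
    ]
    # Repeated wrist rotations at the bolt pose drive the nut down.
    waypoints += [(f"turn_{i + 1}", (bx, by, bz)) for i in range(turns)]
    return waypoints
```

Because the whole routine is parameterized only by the two estimated poses, the same skeleton can be replayed for new part locations without any re-programming.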

Picking Petals From a Flower

Rebecca Pankow and John Oberlin programmed Baxter to pick petals off of a daisy during my graduate seminar last semester, Topics in Grounded Language for Robotics. The robot localizes each petal on the daisy using synthetic photography based on light fields, then plucks it off. It looks for the largest open space when selecting the next petal to pick. It keeps track of the parity of the petals picked so it can either nod and smile (if the answer is “he loves me”) or frown (if the answer is “he loves me not”). This project was recently featured in the New Yorker!
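The two decision rules in that loop, picking the petal beside the largest open space and tracking parity, can be sketched in a few lines. The helper names and angle-based representation here are illustrative; the real system works from light-field imagery rather than precomputed petal angles.

```python
def next_petal(angles_deg):
    """Choose the petal bordering the largest angular gap, i.e. the
    most open space around the flower (petal positions in degrees)."""
    s = sorted(a % 360 for a in angles_deg)
    n = len(s)
    # Gap after petal s[i], wrapping around the circle.
    gaps = [((s[(i + 1) % n] - s[i]) % 360, s[i]) for i in range(n)]
    return max(gaps)[1]

def verdict(petals_picked):
    """Alternating parity, assuming the count starts on 'he loves me'."""
    return "he loves me" if petals_picked % 2 == 1 else "he loves me not"
```

After each pluck the robot re-images the flower, so the gap computation always reflects the petals that actually remain.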


Robotic Overlords?

Our group was featured in this New Yorker article, showcasing Rebecca Pankow and John Oberlin’s work programming Baxter to pick petals from a daisy, as well as some of my thoughts on inequality and automation. I was thrilled with Sheelah’s reporting on this very important issue: the effects of automation on our changing economy.

Our Baxter robot picks petals from a flower.


Comparing Robot Grasping Teleoperation across Desktop and Virtual Reality with ROS Reality

Many tasks, such as defusing a bomb or repairing a nuclear reactor, are too dangerous for humans and better suited to a robot. Ideally, these robots would be autonomous, but robots cannot yet perform all such tasks on their own. For robots to help with these problems today, a human user must control them directly from afar, a practice called teleoperation. With this work, we set out to develop a teleoperation interface that is as intuitive and efficient as possible.

We developed a virtual reality interface that allows novice users to efficiently teleoperate a robot and view its environment in 3D. We have released an open-source ROS package, ROS Reality, which allows anyone to connect a ROS network to a Unity scene over the internet via websockets. ROS topics can be streamed into the Unity scene, and data from the Unity scene can be published back to the ROS network as topics. This lets a human perceive a scene and teleoperate the robot within it to perform a complex task, such as picking up a cup, almost as simply as they would in real life. We conducted a user study comparing our interface to traditional teleoperation methodologies, such as keyboard and monitor, and found a 66% increase in task completion with our system.
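The wire format is the key design choice here: everything crossing the websocket is a small JSON message tagged with an operation, in the style of the rosbridge protocol. A minimal sketch of building such messages follows; the topic names and exact field layout are illustrative, not ROS Reality's actual schema.

```python
import json

def subscribe(topic, msg_type):
    """Ask the ROS side to stream a topic into the Unity scene."""
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def publish(topic, msg):
    """Send data from the Unity scene back to ROS as a topic message."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Unity subscribes to the robot's camera feed, and the user's VR hand
# pose flows back to ROS to drive the arm (hypothetical topic names).
sub = subscribe("/cameras/head_camera/image", "sensor_msgs/Image")
pub = publish("/vr/hand_pose",
              {"position": {"x": 0.4, "y": 0.0, "z": 0.2},
               "orientation": {"x": 0, "y": 0, "z": 0, "w": 1}})
```

Plain JSON over a single websocket keeps the Unity client free of ROS dependencies, at the cost of some serialization overhead for large messages such as images.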

Below is a video of our system being used to teleoperate a Baxter robot at MIT from Brown University (41 miles away). Since our bandwidth requirements are about the same as a Skype call, we are able to establish a relatively low-latency connection that allows 12 cups to be easily stacked in a row. For more information, please check out our paper, which was accepted to ISRR 2017!