
Picking a Fork from Water

This video demonstrates Baxter picking a metal fork from a sink filled with running water. The light field technology uses image averaging to mitigate reflections of light off the surface of the water, but the right kind of image averaging: one that exploits the robot's ability to move its camera and "refocus" the image, seeing through the water to the fork at the bottom. For more information, check out our RSS paper!

https://youtu.be/YCjrLfYepOQ
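The shift-and-average idea behind this kind of synthetic refocusing can be sketched in a few lines of NumPy. The function name, parameters, integer-pixel shifts, and sign convention below are illustrative assumptions, not the code from the paper: the sketch aligns a chosen depth plane across views taken from laterally shifted camera poses and averages them, so reflections at the water's surface stay misaligned and wash out.

```python
import numpy as np

def refocus_at_depth(images, offsets_m, focal_px, depth_m):
    """Shift-and-average synthetic refocusing (a minimal sketch).

    images    -- list of HxW float arrays taken from laterally shifted poses
    offsets_m -- matching list of (dx, dy) camera translations in meters
    focal_px  -- camera focal length in pixels
    depth_m   -- depth of the plane to focus on, e.g. the sink bottom

    Under a simple pinhole model, a point at depth_m shifts by roughly
    focal_px * offset / depth_m pixels between views. Undoing that shift
    aligns the target plane across views; surface reflections sit at a
    different depth, remain misaligned, and average away.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets_m):
        sx = int(round(focal_px * dx / depth_m))  # parallax shift, x (pixels)
        sy = int(round(focal_px * dy / depth_m))  # parallax shift, y (pixels)
        acc += np.roll(img, shift=(sy, sx), axis=(0, 1))
    return acc / len(images)
```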


Screwing a Nut onto a Bolt with Vision

Using light field methods, we can use Baxter's monocular camera to localize the metal 0.24″ nut and the corresponding bolt. The localization is precise enough that the robot can use the estimated poses to perform an open-loop pick, place, and screw to put the nut on the bolt. Precise pose estimation lets complex routines be encoded quickly: once the robot knows where the parts are, it can perform accurate grasps and placements. You can read more about how it works in our RSS 2017 paper.
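To make the open-loop part concrete, here is a rough sketch of what a pick-place-screw routine driven purely by estimated poses might look like. The Pose class, the move_to and gripper stubs, and the regrasping half-turn loop are hypothetical stand-ins for the real arm interface, not the actual routine:

```python
import math
from dataclasses import dataclass, replace

@dataclass
class Pose:
    x: float
    y: float
    z: float
    wrist: float = 0.0  # position in meters plus a wrist angle in radians

def above(p: Pose, height: float) -> Pose:
    return replace(p, z=p.z + height)

# Stubs standing in for the real arm interface (hypothetical names).
def move_to(p: Pose): print("move_to", p)
def close_gripper(): print("close gripper")
def open_gripper(): print("open gripper")

def screw_nut_onto_bolt(nut: Pose, bolt: Pose, half_turns: int = 6):
    """Open-loop pick, place, and screw from estimated poses.

    There is no visual feedback after the initial localization, so
    everything rides on the pose estimates being millimeter-accurate.
    """
    move_to(above(nut, 0.10)); move_to(nut); close_gripper()    # pick
    move_to(above(bolt, 0.10)); move_to(bolt)                   # place on bolt tip
    for _ in range(half_turns):                                 # screw by regrasping
        move_to(replace(bolt, wrist=bolt.wrist + math.pi))      # half turn
        open_gripper()
        move_to(bolt)                                           # unwind wrist, regrasp
        close_gripper()
    open_gripper(); move_to(above(bolt, 0.10))                  # retreat

screw_nut_onto_bolt(Pose(0.60, -0.20, 0.05), Pose(0.60, 0.10, 0.05))
```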

Picking Petals From a Flower

Rebecca Pankow and John Oberlin programmed Baxter to pick petals off of a daisy during my graduate seminar last semester, Topics in Grounded Language for Robotics. The robot localizes each petal on the daisy using synthetic photography based on light fields, then plucks the petals off one at a time. It selects the next petal to pick by looking for the one with the largest open space around it. It keeps track of the parity of the petals picked so it can either nod and smile (if the answer is "he loves me") or frown (if the answer is "he loves me not"). This project was recently featured in the New Yorker!
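The selection rule and the parity bookkeeping are simple enough to sketch. The clearance measure below, the angular gap to the nearest remaining neighbor around the flower's center, is one plausible reading of "largest open space", and the function names and nine-petal example are illustrative rather than the project's code:

```python
import math

def clearance(angle, others):
    """Angular gap from a petal to its nearest remaining neighbor."""
    return min(min(abs(angle - b), 2 * math.pi - abs(angle - b)) for b in others)

def pluck_all(petal_angles):
    """Pick petals one by one, always taking the one with the most open
    space around it, and track parity for the he-loves-me verdict."""
    remaining = list(petal_angles)
    picked = 0
    while remaining:
        if len(remaining) == 1:
            petal = remaining[0]
        else:
            petal = max(remaining,
                        key=lambda a: clearance(a, [b for b in remaining if b != a]))
        remaining.remove(petal)
        picked += 1
    return picked % 2 == 1  # "he loves me" starts the count, so odd ends on it

petals = [i * 2 * math.pi / 9 for i in range(9)]  # nine evenly spaced petals
print("nod and smile" if pluck_all(petals) else "frown")
```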