Setting the Table with YCB Objects

We received a set of the YCB objects last week and decided to complete the Protocol and Benchmark for Table Setting. We attained a score of 10/24! See the video:

https://youtu.be/Rk2IqhwOPSI

Our approach was to use Baxter to autonomously collect visual models of the objects, annotate grasps on those models, and then program the robot to move each object to a predefined position on the table. Placement was the hard part: the table setting doesn't fit entirely within the robot's reachable workspace, so the robot releases some objects from a height rather than setting them down. We could probably improve placement with more careful destination annotations, or by using vision to recognize the colors in the target region.

The plate was especially challenging because it barely fits within Baxter's kinematic range. Planning grasps on it proved so difficult that we left it out, though with fancier motion planning the robot could probably pick it up. Still, we were pleased to recognize and manipulate five of the six objects in very little time using our software stack!
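To make the placement step concrete, here is a minimal sketch of that kind of loop. Everything in it is an illustrative stand-in rather than our actual stack: the helper names (pick, place, drop), the destination coordinates, and the crude sphere-shaped reachability model are all assumptions for the sake of the example.

```python
import math

# Predefined table-setting destinations in the robot base frame (meters).
# These coordinates are made up for illustration; the real benchmark
# template defines the actual layout.
DESTINATIONS = {
    "mug":   (0.80,  0.45, 0.0),
    "bowl":  (0.75,  0.00, 0.0),
    "fork":  (0.55, -0.20, 0.0),
    "knife": (0.55, -0.30, 0.0),
    "spoon": (0.55, -0.40, 0.0),
}

REACH_RADIUS = 0.95      # rough radius of the usable workspace (assumed)
PLACE_CLEARANCE = 0.10   # extra extension needed for a top-down place
DROP_HEIGHT = 0.12       # release height when a full place is out of reach


def reachable(x, y, z, clearance=0.0):
    """Crude reachability model: a sphere around the arm's base, shrunk
    by whatever extra extension the approach motion needs."""
    return math.sqrt(x * x + y * y + z * z) < REACH_RADIUS - clearance


def pick(obj):
    # Stub standing in for grasp execution against our annotated grasps.
    print(f"picking {obj}")


def place(obj, x, y, z):
    # Stub standing in for a full place: descend to the table and release.
    print(f"placing {obj} at ({x:.2f}, {y:.2f})")


def drop(obj, x, y, z):
    # Stub: hover above the destination and release from a height.
    print(f"dropping {obj} from {DROP_HEIGHT} m above ({x:.2f}, {y:.2f})")


for obj, (x, y, z) in DESTINATIONS.items():
    pick(obj)
    if reachable(x, y, z, clearance=PLACE_CLEARANCE):
        place(obj, x, y, z)
    else:
        # Destination sits at the edge of the workspace: release from
        # above instead of attempting an unreachable top-down placement.
        drop(obj, x, y, z)
```

In a real system the stubs would be replaced by perception, grasp execution, and IK-based motion, but the overall structure here (attempt a full place, fall back to releasing from above) mirrors what the paragraph above describes.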

To run this benchmark, we had to create a new target template because the one provided was too small to contain the YCB objects. You can get our template here: