Robots engage in high-level planning when executing real-world tasks. We queried Tell Me Dave for a high-level plan describing how to distribute three mugs among three tables. Since Baxter is not mobile, we simulated the three tables by constructing three pedestals that rest inside Baxter’s workspace.
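To give a flavor of what such a plan looks like, here is a minimal sketch of a symbolic plan and a dispatch loop. The step names, object names, and stub primitives are all hypothetical illustrations, not Tell Me Dave's actual output format or our real controllers:

```python
# Hypothetical sketch: the kind of symbolic plan a natural-language
# planner might return for "Distribute the mugs on the tables" is an
# ordered list of (action, argument) steps, which a robot executive
# then dispatches one at a time to robot-specific primitives.

plan = [
    ("grasp", "mug_1"), ("moveto", "table_1"), ("place", "mug_1"),
    ("grasp", "mug_2"), ("moveto", "table_2"), ("place", "mug_2"),
    ("grasp", "mug_3"), ("moveto", "table_3"), ("place", "mug_3"),
]

def execute(plan, primitives):
    """Run each symbolic step via a robot-specific primitive."""
    for action, arg in plan:
        primitives[action](arg)

# Stub primitives stand in for real grasp/motion controllers;
# they just record what the robot would have done.
log = []
primitives = {
    "grasp":  lambda obj: log.append(f"grasp {obj}"),
    "moveto": lambda loc: log.append(f"moveto {loc}"),
    "place":  lambda obj: log.append(f"place {obj}"),
}
execute(plan, primitives)
```

The nice property of this split is that the planner only reasons over symbols; everything robot-specific lives in the primitive implementations.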
Here is a video showing Baxter distributing three mugs onto three pedestals of different heights in response to the natural language input “Distribute the mugs on the tables”:
We like bringing kids and robots together, and we recently shot this video of Stefanie’s son Jay interacting with Baxter:
It’s Jay’s new favorite robot video!
We received a set of the YCB objects last week and decided to complete the Protocol and Benchmark for Table Setting. We attained a score of 10/24! See the video:
Our approach was to use Baxter to autonomously collect visual models of the objects, annotate grasps, and then program the robot to move each object to a predefined position on the table. Placement was challenging because the table setting doesn’t fit entirely within the robot’s kinematic space, so the robot drops some objects from a height. We could probably improve placement with more careful destination annotations, or by using vision to recognize the colors in the target region. The plate was especially challenging for Baxter because it barely fits within the robot’s kinematic space. Planning grasps on the plate proved very difficult, so we left it out; with fancier motion planning, the robot could probably pick it up. Still, we were pleased to recognize and manipulate five of the six objects in very little time using our software stack!
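The drop-from-a-height workaround can be sketched in a few lines. Everything here is illustrative, not our actual software stack: the reach radius, drop height, and destination coordinates are made-up numbers, and the reachability check is a crude planar-distance test standing in for a real kinematic query:

```python
# Hedged sketch of the placement step described above: each object has
# a predefined (x, y) destination on the table. When the annotated spot
# lies outside an assumed reachable radius, the release point is clamped
# to the workspace edge and the object is dropped from a fixed height.
import math

REACH_RADIUS = 0.9   # assumed reachable planar radius from the base (m)
DROP_HEIGHT  = 0.15  # release height for unreachable spots (m)

# Illustrative destination annotations: object -> (x, y) on the table.
destinations = {
    "fork": (0.55, -0.20),
    "cup":  (0.60,  0.25),
    "bowl": (1.00,  0.10),  # outside reach: dropped from a height
}

def release_pose(obj):
    """Return the (x, y, z) pose at which to open the gripper."""
    x, y = destinations[obj]
    r = math.hypot(x, y)
    if r <= REACH_RADIUS:
        return (x, y, 0.0)  # reachable: set the object down gently
    # Unreachable: clamp to the workspace boundary and release from
    # a height so the object falls toward its annotated spot.
    s = REACH_RADIUS / r
    return (x * s, y * s, DROP_HEIGHT)
```

This is also where more careful destination annotations would pay off: shifting an annotated spot even a few centimeters inward can turn a drop into a gentle set-down.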
To run this benchmark, we had to create a new target template, because the one provided was too small to contain the YCB objects. You can get our template here: