MIT’s Baxter running our pick and place stack!

We worked with Bianca Homberg and Mehmet Dogar from Daniela Rus's group to install our pick and place stack on their Baxter! We were all surprised at the differences between our robots, even though they are the "same" robot: the camera location, calibration parameters, and gripper masks all needed to change. But once we had recalibrated everything, the robot was able to pick up the brush! Next we will try to get it to work with their soft hand.
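The kind of per-robot parameters that had to change can be sketched as a small configuration object. This is a hypothetical illustration, not the stack's actual configuration format; all names and values are placeholders.

```python
from dataclasses import dataclass

@dataclass
class RobotCalibration:
    # Parameters that differed between the two "identical" Baxters;
    # values below are illustrative placeholders, not real measurements.
    camera_offset_m: tuple   # camera position relative to the wrist (x, y, z)
    camera_intrinsics: dict  # e.g. focal lengths and principal point
    gripper_mask: list       # wrist-camera pixels occluded by the gripper

robot_a = RobotCalibration(
    camera_offset_m=(0.030, 0.000, 0.100),
    camera_intrinsics={"fx": 400.0, "fy": 400.0, "cx": 320.0, "cy": 200.0},
    gripper_mask=[(0, 0), (0, 1)],
)

# The second robot needed its own values for all three fields.
robot_b = RobotCalibration(
    camera_offset_m=(0.035, 0.002, 0.090),
    camera_intrinsics={"fx": 410.0, "fy": 405.0, "cx": 318.0, "cy": 205.0},
    gripper_mask=[(0, 0), (1, 0)],
)
```

Keeping these in one structure makes the port a matter of swapping configs rather than hunting for hard-coded constants.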

Contrast Agents for Viewing Transparent Objects in IR

Sensors such as the Kinect or an IR range finder cannot perceive objects that are invisible in IR, such as transparent ones. To address this problem, we have developed a technique for applying a temporary contrast agent so the object can be imaged in IR. We scan the object with the contrast agent applied to obtain a high-quality depth map. After the contrast agent is removed, we localize the object with vision and register the stored high-quality depth information to the scene using the visual pose estimate. This video shows our preferred method for applying the contrast agent.
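The pipeline above can be sketched as two steps: capture a dense model while the agent makes the object opaque in IR, then at run time transform that stored geometry into the scene using a vision-based pose estimate. The function names and interfaces here (`scan_depth`, `estimate_pose_visual`) are assumptions for illustration, not the actual system API.

```python
import numpy as np

def capture_model_with_contrast_agent(scan_depth):
    """With the contrast agent applied, the object is opaque in IR,
    so a depth scan yields dense geometry in the object's own frame."""
    return scan_depth()  # (N, 3) array of points, object frame

def localize_and_fuse(model_points, estimate_pose_visual):
    """After the agent is removed, localize the object with vision
    and place the stored high-quality geometry at that pose."""
    T = estimate_pose_visual()  # assumed 4x4 object-to-camera transform
    homog = np.hstack([model_points, np.ones((len(model_points), 1))])
    return (homog @ T.T)[:, :3]  # model geometry expressed in the camera frame
```

The key design point is that the expensive, high-quality scan happens once offline; only a coarse visual pose estimate is needed online.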


Multimodal Bayes Filter

We made a video of Baxter interpreting multimodal referring expressions using our multimodal Bayes filter. Our system interprets referring expressions in real time, outputting a distribution over objects at 14 Hz.
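The core of a discrete Bayes filter over candidate objects is simple: each observation from a modality (a word, a gesture) multiplies the current belief by a per-modality likelihood, then renormalizes. This is a minimal sketch of that update; the likelihood values are illustrative, not our learned models.

```python
import numpy as np

def bayes_update(belief, likelihood):
    """One filter step: posterior is proportional to likelihood * prior."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Uniform prior over three candidate objects.
belief = np.ones(3) / 3.0

# Language observation (e.g. "the red one") favoring object 0;
# values are made up for illustration.
belief = bayes_update(belief, np.array([0.8, 0.1, 0.1]))

# Gesture observation (e.g. a point toward objects 0 and 1).
belief = bayes_update(belief, np.array([0.45, 0.45, 0.1]))
```

Because each update is a single multiply-and-normalize over the object set, the belief can be refreshed at frame rate, which is what makes real-time output like 14 Hz feasible.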