NERC 2015

We are excited to announce the 4th edition of NERC, the Northeast Robotics Colloquium.  It will be held on Saturday, November 7th at WPI.  More details are at http://nerc.mit.edu.  We have an exciting lineup of speakers including Leslie Kaelbling, Drew Bennett, and Sangbae Kim.
It’s hard to believe that NERC is four years old.  It’s not what I expected when we founded the event back in 2012.  It’s great to see so much energy and excitement around robotics in the Northeast!

Announcing BurlapCraft

We are announcing the release of BurlapCraft 1.1.  BurlapCraft is a Minecraft mod that allows an autonomous agent to be controlled by BURLAP, the Brown-UMBC Reinforcement Learning and Planning Library.  Performing experimental research on robotic platforms involves numerous practical complications, and studying collaborative interactions or efficiently collecting data from humans requires real-time response.  BurlapCraft circumvents these complications by letting BURLAP model and solve different tasks directly within Minecraft, and it makes reinforcement learning and planning easier in three core ways:

  • the underlying Minecraft environment makes it simple for developers to construct experiments, allowing rapid prototyping of experimental setups;
  • BURLAP contributes a wide variety of extensible algorithms for learning and planning, allowing easy iteration and development of task models and algorithms;
  • the familiarity and ubiquity of Minecraft make it easy to recruit and train users, yet the game includes very challenging tasks that are unsolvable by existing planners.

To validate BurlapCraft as a platform for AI development, we have demonstrated the execution of A*, BFS, and RMax, as well as language understanding and learning language groundings from user demonstrations, in five Minecraft “dungeons.”
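To give a feel for the workflow, here is a minimal sketch of solving one of those dungeons with BFS through BURLAP.  The dungeon-generator class and its helper methods are stand-ins for what BurlapCraft actually exposes, and the import paths follow the BURLAP 2 tutorials, so treat the exact names as assumptions rather than as the mod’s real API:

    // Sketch: plan through a BurlapCraft dungeon with BFS.
    // Import paths follow the BURLAP 2 tutorials and may differ by version;
    // MinecraftDungeonGenerator and its helper methods are illustrative
    // stand-ins for whatever BurlapCraft actually exposes.
    import burlap.oomdp.core.Domain;
    import burlap.oomdp.core.states.State;
    import burlap.oomdp.statehashing.SimpleHashableStateFactory;
    import burlap.behavior.policy.Policy;
    import burlap.behavior.singleagent.planning.deterministic.uninformed.bfs.BFS;

    public class DungeonBFSExample {
        public static void main(String[] args) {
            // Build a BURLAP domain from the dungeon the agent is standing in
            // (hypothetical BurlapCraft hook).
            MinecraftDungeonGenerator dungeon = new MinecraftDungeonGenerator();
            Domain domain = dungeon.generateDomain();
            State start = dungeon.getCurrentWorldState();

            // Plan to the dungeon's goal condition with breadth-first search.
            BFS planner = new BFS(domain, dungeon.getGoalCondition(),
                    new SimpleHashableStateFactory());
            Policy policy = planner.planFromState(start);

            // Roll the policy out from the start state; in the mod itself this
            // is the point where the agent acts in the Minecraft world.
            policy.evaluateBehavior(start, dungeon.getRewardFunction(),
                    dungeon.getTerminalFunction());
        }
    }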

Here is a video of a Minecraft agent being trained to understand natural language commands from examples:

The technical approach we’ve taken to language learning is described here.

Try it now!  Download our mod jar file and follow these instructions to teach the agent more commands.  You can also find source code and detailed instructions on installing and using the mod here.

Our paper describing the mod was recently published in the Artificial Intelligence for Human-Robot Interaction AAAI Fall Symposium:
Krishna Aluru, Stefanie Tellex, John Oberlin, and James MacGlashan. Minecraft as an experimental world for AI in robotics. In AAAI Fall Symposium, 2015.

More information about this project is here.  You can also read about our work on learning to plan in large state spaces like Minecraft.

Our work outlines challenges that will motivate the development of better planning and learning algorithms that can be quickly validated and compared against other approaches. In the future, we would like to leverage the user base that comes with Minecraft to collect new and interesting data for such tasks and to develop algorithms to solve them.

Twin Operation

On Friday we visited Rethink Robotics to install our software stack on their robots. They have agreed to help out with our scanning project. We calibrated three of the four arms; the fourth arm’s wrist camera had some kind of problem (perhaps hardware?) that produced images too noisy to be useful. We had never gotten to use two robots at once before, and it was exciting to see them both moving and scanning objects. Our scanning stack is rapidly maturing, and we are on our way to scanning one million objects! More information about the research project is available in our Blue Sky Ideas paper.

Placing Mugs On Pedestals With Tell Me Dave

Robots engage in high-level planning when executing real-world tasks. We queried Tell Me Dave for a high-level plan describing how to distribute three mugs among three tables. Since Baxter is not mobile, we simulated the three tables by constructing three pedestals that rest inside Baxter’s workspace.
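At a high level, the demo boils down to grounding the command into a symbolic plan and executing each step on Baxter.  Here is a rough sketch of that pipeline; the class names are hypothetical, not Tell Me Dave’s or our stack’s actual API:

    // Hypothetical sketch: natural-language command -> high-level plan -> Baxter.
    // TellMeDaveClient and BaxterExecutor are illustrative stand-ins.
    import java.util.List;

    public class MugDistributionDemo {
        public static void main(String[] args) {
            TellMeDaveClient planner = new TellMeDaveClient();
            BaxterExecutor baxter = new BaxterExecutor();

            // Tell Me Dave grounds the command into a sequence of symbolic
            // steps, e.g. "grasp mug_1", "place mug_1 on table_1", ...
            List<String> plan = planner.plan("Distribute the mugs on the tables");

            for (String step : plan) {
                // Each symbolic table maps to one of the three pedestals,
                // which keeps every placement inside Baxter's workspace.
                baxter.execute(step);
            }
        }
    }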

Here is a video showing Baxter distributing three mugs on three pedestals of different heights in response to the natural language input “Distribute the mugs on the tables”:

 

Setting the Table with YCB Objects

We received a set of the YCB objects last week and decided to complete the Protocol and Benchmark for Table Setting. We attained a score of 10/24!  See the video:

Our approach was to use Baxter to autonomously collect visual models for the objects, annotate grasps, and then program the robot to move the objects to predefined positions on the table.  Placement was challenging because the table setting doesn’t fit entirely within the robot’s kinematic space, so the robot drops some objects from a height.  We could probably improve placement with more careful destination annotations, or by using vision to recognize the colors in the target region.  The plate was challenging for Baxter because it barely fits within the robot’s kinematic space.  It was very difficult for us to plan grasps on the plate, so we left it out; with fancier motion planning, the robot could probably pick it up.  We were pleased to be able to recognize and manipulate five of the six objects in very little time using our software stack!
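Roughly, the placement loop looks like the sketch below; the helper classes and methods are hypothetical stand-ins for our actual scanning and manipulation stack:

    // Rough sketch of the table-setting loop described above. ScannedObject,
    // Pose, TableTemplate, and BaxterArm are hypothetical stand-ins.
    import java.util.Map;

    public class TableSettingDemo {
        public static void main(String[] args) {
            BaxterArm baxter = new BaxterArm();

            // Hand-annotated destination poses on the (enlarged) table template.
            Map<ScannedObject, Pose> destinations = TableTemplate.loadAnnotations();

            for (Map.Entry<ScannedObject, Pose> entry : destinations.entrySet()) {
                ScannedObject object = entry.getKey();
                Pose target = entry.getValue();

                // Grasp using the visual model and grasp annotations collected
                // autonomously during scanning.
                baxter.pickUp(object);

                if (baxter.isReachable(target)) {
                    baxter.placeAt(target);
                } else {
                    // The full table setting doesn't fit in Baxter's kinematic
                    // space, so out-of-reach items are released from a height
                    // above the nearest reachable point.
                    baxter.dropAbove(baxter.nearestReachable(target));
                }
            }
        }
    }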

To run this benchmark, we had to create a new target template, because the one provided was too small to contain the YCB objects.  You can get our template here: