All posts by dwhit

Reducing Errors in Object-Fetching Interactions through Social Feedback

Humans communicating with other humans use a feedback loop that enables errors to be detected and fixed, increasing overall interaction success. We aim to enable robots to participate in this feedback loop so that they elicit additional information from the person when they are confused, and use that information to resolve ambiguity and infer the person’s needs. This technology will enable robots to interact fluidly with untrained users who communicate with them using language and gestures. People from all walks of life can benefit from robotic help with physical tasks, ranging from fetching objects for a disabled veteran in his home to assisting a captain coordinating a search-and-rescue mission.

Our latest paper defines a mathematical framework for an item-fetching domain that allows a robot to interpret a person’s requests more quickly and accurately by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP) and approximately solve this POMDP in real time. Our model improves the speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model’s improvements, we conducted a real-world user study with 16 participants. Our model is 2.17 seconds (25%) faster than the state-of-the-art baseline, while being 2.1% more accurate.
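To give a rough sense of the idea (this is an illustrative sketch, not the model from the paper), the robot can maintain a belief over which object the person wants, update it with each noisy observation, and only ask a question when the belief is too spread out. The observation likelihoods and the 0.9 confidence threshold below are made-up values for illustration:

```python
# Illustrative sketch (not the paper's model): keep a belief over which object the
# user wants, update it with noisy observations of language and gesture, and either
# fetch or ask a clarifying question depending on how concentrated the belief is.
def update_belief(belief, likelihoods):
    """Bayesian update: posterior[i] is proportional to belief[i] * P(obs | object i)."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

def choose_action(belief, confidence=0.9):
    """Fetch the most likely object if confident enough; otherwise ask about it."""
    best = max(range(len(belief)), key=lambda i: belief[i])
    return ("fetch" if belief[best] >= confidence else "ask", best)

# Three candidate objects on the table, uniform prior.
belief = [1.0 / 3] * 3

# An ambiguous pointing gesture from far away: objects 0 and 1 are both plausible.
belief = update_belief(belief, likelihoods=[0.45, 0.40, 0.15])
print(choose_action(belief))   # ('ask', 0) -- the belief is too spread out

# The user answers "yes" to a question about object 0.
belief = update_belief(belief, likelihoods=[0.95, 0.05, 0.05])
print(choose_action(belief))   # ('fetch', 0)
```

In the paper, the decision of when to ask comes from approximately solving the POMDP rather than from a hand-set threshold.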

You can see the system in action in this video: when the user is close to the robot, it interprets the pointing gesture and immediately selects the correct object without asking a question. However, when the user is farther away, the pointing gesture is more ambiguous, so the robot asks a targeted question. After the user answers, the robot selects the correct object. For more information, see our paper, which was accepted to ICRA 2017!

Grippers On The Robot!

In more student work, our own Eric Rosen created a song and dance for our robots to better communicate with children. Check it out below!

Understanding the basic capabilities of robots will be important for everyone once robots are integrated into our everyday lives. Without this knowledge, human-robot interactions will suffer, not only because people won’t be able to make the best use of each agent’s skills, but because people may come to fear what they don’t understand. Many things are scary when we first encounter them as young children, but become less so as we grow accustomed to them. Educators use songs and dances to engage young students in a fun way to learn about everyday things they will encounter in their lives, such as the wheels on a bus. But what song and dance can you do with your child to teach them about robots?

We at the H2R lab made our own child-robot song and dance, “The Grippers on the Robot” (sung to the tune of “The Wheels on the Bus”). Anyone can sing along and dance with Baxter through three preprogrammed dance sequences: the grippers on the robot go open and close; the servos on the robot go roll, pitch, yaw; the IK on the robot goes plan, plan, plan. This allows young children to have a fun experience with a robot, and even start to understand how a robot navigates and manipulates the world!
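The full choreography lives in the repository linked below; as a taste of what one verse looks like in code, here is a minimal sketch of the “grippers go open and close” sequence, assuming a standard Baxter ROS setup with the baxter_interface package (the beat timing and number of repetitions are made up):

```python
# A minimal sketch of the "grippers go open and close" verse, assuming a standard
# Baxter ROS setup with the baxter_interface Python package; the beat timing and
# number of repetitions are made up for illustration.
import rospy
import baxter_interface

rospy.init_node("grippers_on_the_robot")

left = baxter_interface.Gripper("left")
right = baxter_interface.Gripper("right")
left.calibrate()
right.calibrate()

beat = rospy.Rate(1)  # one open/close movement per second
for _ in range(8):    # "...the grippers on the robot go open and close..."
    left.open()
    right.open()
    beat.sleep()
    left.close()
    right.close()
    beat.sleep()
```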

If you want to let people dance and sing with your Baxter, check out the code at the GitHub link below or email me at eric_rosen@brown.edu!

https://github.com/ericrosenbrown/Robot-Song

Baxter Bowling!

This summer, we had some great high school students work on projects involving our robots. Anisha Agarwal was one of those students. She built a bowling routine for Baxter. Here is her project!

With the ability to pick and place objects comes a surprising amount of power. Picking up objects and placing them down is the basis for setting a table, drawing, building block structures, playing numerous games, and more. We decided to use this power to teach Baxter how to bowl. The bowling program sets up bowling pins and knocks them down by rolling a ball toward them. Baxter moves 3 bowling pins from the home area to an area at the other end of the table. Baxter then picks up a “bowling” ball (although, for our purposes, a golf ball worked better), swings its arm, and releases the ball toward the upright pins.
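As a rough sketch of how such a routine is structured (this is illustrative, not Anisha’s actual code, and the joint-angle values below are hypothetical placeholders rather than real calibrated poses):

```python
# Illustrative structure of the bowling routine (not the actual student code).
# The joint-angle dictionaries are hypothetical placeholders; in practice they come
# from IK solutions or from recording the arm's poses on the real table.
import rospy
import baxter_interface

rospy.init_node("baxter_bowling")
arm = baxter_interface.Limb("right")
gripper = baxter_interface.Gripper("right")
gripper.calibrate()

PIN_PICKUPS = [{"right_s0": 0.0}, {"right_s0": 0.2}, {"right_s0": 0.4}]        # home area
PIN_PLACEMENTS = [{"right_s0": -0.6}, {"right_s0": -0.8}, {"right_s0": -1.0}]  # far end of table
BALL_PICKUP = {"right_s0": 0.6}
BOWLING_START = {"right_s0": -0.3}

def move_and_grip(joint_angles, close):
    """Move the arm to a pose, then close (grasp) or open (release) the gripper."""
    arm.move_to_joint_positions(joint_angles)
    gripper.close() if close else gripper.open()
    rospy.sleep(0.5)

# 1. Carry each of the three pins from the home area to its upright position.
for pick_pose, place_pose in zip(PIN_PICKUPS, PIN_PLACEMENTS):
    move_and_grip(pick_pose, close=True)
    move_and_grip(place_pose, close=False)

# 2. Grab the ball, swing the arm toward the pins, and let go mid-swing.
move_and_grip(BALL_PICKUP, close=True)
arm.move_to_joint_positions(BOWLING_START)
arm.set_joint_velocities({"right_s1": 2.0})
rospy.sleep(0.3)
gripper.open()
```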

Occasionally, Baxter accidentally knocks down a pin in its attempt to place another one nearby. A very specific gripper setting is also necessary: the grippers must be wide enough for the ball, but slim enough to grasp the thinnest portion of the bowling pins. Finally, since all 3 pins and the bowling ball are presented to Baxter at once, it can be difficult to arrange them so that they are far enough apart not to confuse the robot, but still close enough that every piece is within the arm’s reach.

Despite these limitations, it’s exciting to watch Baxter setting up and knocking down pins!

Amazon Echo + ROS + Baxter

Hey! So here’s a cool video of us using an Amazon Echo to control one of our Baxters. The Echo definitely outperforms our previous speech-to-text methods, and we like using it.

If you’re interested in finding out about how we did it, check the code here, or shoot me an email at david_whitney@brown.edu.
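For reference, here is one plausible way to wire an Echo to ROS, sketched with the Flask-Ask library; the intent name, slot name, and /voice_commands topic are assumptions for illustration, not necessarily what our code does:

```python
# One plausible wiring (a sketch, not necessarily our implementation): an Alexa
# skill handled with the Flask-Ask library publishes the recognized command on a
# ROS topic, and whatever node controls Baxter subscribes to that topic. The
# intent name, slot name, and /voice_commands topic are assumptions.
import rospy
from std_msgs.msg import String
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, "/")

rospy.init_node("echo_bridge", disable_signals=True)
pub = rospy.Publisher("/voice_commands", String, queue_size=1)

@ask.intent("CommandIntent")
def handle_command(command):
    """Flask-Ask fills 'command' from the skill's slot of the same name."""
    pub.publish(String(data=command))
    return statement("Okay, telling Baxter to {}".format(command))

if __name__ == "__main__":
    app.run(port=5000)  # the skill endpoint that Alexa is pointed at
```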

Initial Data from the Million Object Challenge

We are releasing a teaser data set of N objects for the Million Object Challenge.  This data consists of objects mapped using Baxters at our site.  We include objects from the YCB data set, as well as other objects arranged into several object categories.

Download the data here: http://cs.brown.edu/~stefie10/mocTeaserData2016-02-22.tar.gz.

The archive contains a stand-alone Python program that shows how to read and parse the data.
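If you just want to peek inside the archive before running that program, a minimal download-and-extract sketch with the Python standard library looks like this (the actual data format is documented by the included program, not here):

```python
# Quick sketch: download the teaser archive and list its first few entries with the
# standard library. The included program documents the actual data format.
import tarfile
import urllib.request

URL = "http://cs.brown.edu/~stefie10/mocTeaserData2016-02-22.tar.gz"
LOCAL = "mocTeaserData2016-02-22.tar.gz"

urllib.request.urlretrieve(URL, LOCAL)

with tarfile.open(LOCAL, "r:gz") as tar:
    for member in tar.getmembers()[:20]:
        print(member.name, member.size)
    tar.extractall("moc_teaser_data")
```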

NERC 2015

We are excited to announce the 4th edition of NERC.  It will be held on Saturday, November 7th at WPI.  More details are at http://nerc.mit.edu.  We have an exciting lineup of speakers including Leslie Kaelbling, Drew Bennett, and Sangbae Kim.
It’s hard to believe that NERC is four years old.  It’s not what I expected when we founded the event back in 2012.  It’s great to see so much energy and excitement around robotics in the Northeast!

Announcing BurlapCraft

We are announcing the release of BurlapCraft 1.1. BurlapCraft is a mod for Minecraft that allows an autonomous agent to be controlled by BURLAP, the Brown-UMBC Reinforcement Learning and Planning Library. Performing experimental research on physical robotic platforms involves numerous practical complications, while studying collaborative interactions and efficiently collecting data from humans benefit from real-time response. BurlapCraft circumvents these complications by letting us use BURLAP to model and solve different tasks within Minecraft. BurlapCraft makes reinforcement learning and planning easier in three core ways:

  • the underlying Minecraft environment makes the construction of experiments simple for the developer, allowing rapid prototyping of experimental setups;
  • BURLAP contributes a wide variety of extensible algorithms for learning and planning, allowing easy iteration and development of task models and algorithms;
  • the familiarity and ubiquity of Minecraft make it easy to recruit and train users, yet the game includes very challenging tasks that are unsolvable by existing planners.

To validate BurlapCraft as a platform for AI development, we have demonstrated the execution of A*, BFS, RMax, language understanding, and learning language groundings from user demonstrations in five Minecraft “dungeons.”
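To give a concrete sense of what planning in a small dungeon looks like, here is a toy breadth-first-search planner over a hand-made grid, written directly in Python. BurlapCraft itself drives BURLAP’s (Java) planners against real Minecraft state, so treat this purely as an illustration of the kind of navigation problem being solved:

```python
# Toy breadth-first-search planner over a tiny hand-made "dungeon" grid.
# BurlapCraft itself drives BURLAP's Java planners against Minecraft state; this is
# only an illustration of the kind of navigation task those planners solve.
from collections import deque

DUNGEON = [          # 'S' start, 'G' goal, '#' wall
    "S..#.",
    ".#.#.",
    ".#...",
    "...#G",
]

def bfs_plan(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "G")
    moves = {"north": (-1, 0), "south": (1, 0), "west": (0, -1), "east": (0, 1)}
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (r, c), plan = frontier.popleft()
        if (r, c) == goal:
            return plan
        for action, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in visited:
                visited.add((nr, nc))
                frontier.append(((nr, nc), plan + [action]))
    return None  # the goal is unreachable

print(bfs_plan(DUNGEON))  # prints a shortest sequence of moves from S to G
```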

Here is a video of a Minecraft agent being trained to understand natural language commands from examples:

The technical approach we’ve taken to language learning is described here.

Try it now! You can download our mod jar file and follow these instructions to teach the agent more commands. You can also find source code and detailed instructions on installing and using the mod here.

Our paper describing the mod was recently published at the Artificial Intelligence for Human-Robot Interaction AAAI Fall Symposium:
Krishna Aluru, Stefanie Tellex, John Oberlin, and James MacGlashan. Minecraft as an experimental world for AI in robotics. In AAAI Fall Symposium, 2015.

More information about this project is here.   You can also read about our work on learning to plan in large state spaces like Minecraft.

Our work outlines challenges that will motivate the development of better planning and learning algorithms that can be quickly validated and compared to other work. In the future, we would like to leverage the user base that comes with Minecraft to collect new and interesting data for such tasks and to develop algorithms to solve them.