Shared December 4, 2017
UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.
Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now – predictions made only several seconds into the future – but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That's because the visual imagination is learned entirely from scratch through unattended, unsupervised exploration, in which the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world and can use this model to manipulate new objects that it has not seen before.
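To make the planning idea concrete, here is a minimal sketch of how a robot could use a learned video predictor to choose actions: sample candidate action sequences, imagine the resulting frames, score them against a goal image, and execute the best first action before replanning. This is a hypothetical illustration in NumPy, not the authors' implementation; `predict_frames` is a stand-in for the learned model, and the pixel-distance cost is a simplification of the objectives used in the actual work.

```python
import numpy as np

def predict_frames(model, frame, actions):
    """Placeholder for a learned action-conditioned video predictor.

    Returns one predicted frame per action step. Here it just returns
    copies of the current frame so the sketch runs end to end.
    """
    return np.stack([frame for _ in actions])

def plan_action(model, current_frame, goal_image, horizon=5, n_samples=100):
    """Random-shooting planner: pick the action sequence whose predicted
    final frame is closest (in pixel space) to the goal image."""
    best_cost, best_actions = np.inf, None
    for _ in range(n_samples):
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, 2))  # e.g. 2-D pushing motions
        predicted = predict_frames(model, current_frame, actions)
        cost = np.mean((predicted[-1] - goal_image) ** 2)          # pixel-distance cost
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions[0]  # execute only the first action, then replan

# Example usage with dummy 64x64 grayscale images.
frame = np.zeros((64, 64))
goal = np.ones((64, 64))
action = plan_action(None, frame, goal)
```

In practice the system replans after every action, so small prediction errors a few seconds out do not accumulate over the whole task.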
“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
The research team will perform a demonstration of the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on December 5.
At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
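The following sketch illustrates the pixel-transformation idea behind DNA-style models: rather than predicting raw pixel values, the network outputs a small normalized kernel for each pixel, and the next frame is formed by applying that kernel to a neighborhood of the previous frame. This is a hypothetical NumPy illustration under that assumption; the random kernels stand in for what a convolutional recurrent network would output conditioned on the image and the robot's action.

```python
import numpy as np

def apply_pixel_kernels(prev_frame, kernels):
    """Form the next frame: each output pixel is a kernel-weighted sum of a
    k x k neighborhood of the previous frame around the same location."""
    h, w = prev_frame.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(prev_frame, pad, mode="edge")
    next_frame = np.zeros_like(prev_frame)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]
            next_frame[i, j] = np.sum(patch * kernels[i, j])
    return next_frame

# Example: 64x64 frame, one 5x5 kernel per pixel, normalized to sum to 1.
h, w, k = 64, 64, 5
prev_frame = np.random.rand(h, w)
raw = np.random.rand(h, w, k, k)                      # stand-in for network output
kernels = raw / raw.sum(axis=(2, 3), keepdims=True)   # softmax-like normalization
next_frame = apply_pixel_kernels(prev_frame, kernels)
```

Because the predicted frame is built by moving pixels from the previous frame, the model does not have to re-invent object appearance at every step, which is part of why it can generalize to objects it has never seen before.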
“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.
Full Story: https://news.berkeley.edu/2017/12/04/...
Featured researchers: Assistant Professor Sergey Levine, doctoral student Chelsea Finn and graduate student Frederik Ebert
Video by Roxanne Makasdjian and Stephen McNally
Music: "Plastic of Paper" by Wes Hutchinson, "New Phantom" and "Believer" by Silent Partner