A new model offers robots precise pick-and-place solutions

Pick-and-place machines are a type of automated equipment used to place objects in structured, organized locations. These machines are used for a wide variety of applications—from electronics assembly and packaging to bin picking and inspection—but many current pick-and-place solutions are limited: they lack “precise generalization,” or the ability to solve many tasks without compromising accuracy.

“In industry, you often see that (manufacturers) end up developing very customized solutions for their specific problem, so a lot of engineering and not much flexibility in terms of the solution,” said Maria Bauza Villalonga PhD ’22, principal scientist at Google DeepMind, where she works on robotics and robot manipulation. “SimPLE solves this problem and provides a pick-and-place solution that is flexible but still provides the precision needed.”

A new paper by MechE researchers, published in the journal Science Robotics, explores pick-and-place solutions with greater precision. In precise pick-and-place, also called kitting, the robot transforms an unstructured arrangement of objects into an ordered arrangement. The approach, called SimPLE (Simulation to Pick Localize and placE), learns to pick up, regrasp, and place objects using only the objects’ CAD (computer-aided design) models, without any prior experience with the specific objects.
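As a rough sketch of what that pipeline looks like at a high level, the short Python example below walks through the pick, localize, regrasp, and place steps described in the article. It is not the SimPLE implementation; every name in it (grasp_from_bin, localize_in_hand, plan_and_execute_regrasp, place_at, and the robot object itself) is a hypothetical placeholder.

```python
# Illustrative sketch only (not the authors' code): a high-level "pick, localize,
# place" loop for kitting a single object. All function names are hypothetical.

def kit_object(robot, cad_model, target_pose):
    """Move one object from an unstructured pile into its slot in the kit."""
    robot.grasp_from_bin(cad_model)                   # task-aware grasp of the object
    pose_in_hand = robot.localize_in_hand(cad_model)  # estimate how the object sits in the gripper
    if not robot.can_place_directly(pose_in_hand, target_pose):
        # Hand the object between the two arms to reorient it before placing.
        pose_in_hand = robot.plan_and_execute_regrasp(pose_in_hand)
    robot.place_at(pose_in_hand, target_pose)         # precise placement into the kit
```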

“The promise of SimPLE is that we can solve many different tasks with the same hardware and software, using simulation to learn models that adapt to each specific task,” says Alberto Rodriguez, a visiting scholar at MIT who was formerly a member of the MechE faculty and is now deputy director of manipulation research at Boston Dynamics. SimPLE was developed by members of the Manipulation and Mechanisms Lab at MIT (MCube), led by Rodriguez.

“In this work, we show that it is possible to achieve the positioning accuracy required for many industrial pick-and-place tasks without further specialization,” says Rodriguez.

Pick and place with precision: MIT doctoral student Antonia Delores Bronars SM ’22 describes the new SimPLE (Simulation to Pick Localize and placE) system.
Video: John Freidah/MIT Department of Mechanical Engineering

The SimPLE solution uses a dual-arm robot with visual-tactile sensing and leverages three main components: task-aware grasping, perception by sight and touch (visuotactile perception), and regrasp planning. Real-world observations are matched to a set of simulated observations using supervised learning, which lets the system estimate a distribution of likely object poses and plan the placement.
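As a rough illustration of that matching step, the sketch below shows one way similarity scores between a real visuotactile observation and observations simulated from the CAD model could be turned into a distribution over likely object poses. It is not the authors’ implementation; the function names (render_from_cad, similarity_score) and the softmax weighting are assumptions made for illustration.

```python
# Minimal illustrative sketch (not the authors' code): estimating an object-pose
# distribution by matching a real visuotactile observation against observations
# simulated from the object's CAD model. All names here are hypothetical.
import numpy as np

def estimate_pose_distribution(real_obs, candidate_poses, render_from_cad, similarity_score):
    """Return a probability over candidate in-hand poses given one real observation.

    real_obs         -- camera and tactile-sensor readings from the actual grasp
    candidate_poses  -- sampled in-hand object poses to test
    render_from_cad  -- simulates the observation the sensors would produce
                        if the object were held at a given pose
    similarity_score -- learned (supervised) model scoring how well a simulated
                        observation matches the real one
    """
    scores = np.array([
        similarity_score(real_obs, render_from_cad(pose))
        for pose in candidate_poses
    ])
    # Softmax turns raw match scores into a distribution over likely poses.
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()

# A downstream planner could then pick the most likely pose (or keep the whole
# distribution) to decide whether to place directly or regrasp first, e.g.:
# best_pose = candidate_poses[np.argmax(pose_probs)]
```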

In experiments, SimPLE successfully demonstrated the ability to pick up and place different objects spanning a wide range of shapes. Placements were successful more than 90 percent of the time for six of the objects, and more than 80 percent of the time for 11 of them.

“There is an intuitive understanding in the robotics community that vision and touch are both useful, but (until now) there have not been many systematic demonstrations of how they can be useful for complex robotics tasks,” says mechanical engineering doctoral student Antonia Delores Bronars SM ’22. Bronars, who is now working with Pulkit Agrawal, assistant professor in the Department of Electrical Engineering and Computer Science (EECS), is continuing her doctoral research studying the incorporation of tactile capabilities into robotic systems.

“Most work on grasping ignores the subsequent tasks,” says Matt Mason, a senior scientist at Berkshire Grey and professor emeritus at Carnegie Mellon University who was not involved in the work. “This paper goes beyond the desire to mimic humans and shows, from a strictly functional perspective, the benefit of combining tactile perception and vision with two hands.”

Ken Goldberg, the William S. Floyd Jr. Distinguished Chair in Engineering at the University of California, Berkeley, who was also not involved in the study, says the robot manipulation method described in the paper represents a valuable alternative to the trend toward AI and machine learning methods.

“The authors combine well-founded geometric algorithms that can reliably achieve high precision for a given set of object shapes and show that this combination can significantly improve performance over AI methods,” says Goldberg, who is also co-founder and chief scientist of Ambi Robotics and Jacobi Robotics. “This can be immediately useful in industry and is an excellent example of what I call ‘good old-fashioned engineering’ (GOFE).”

Bauza and Bronars say this work is the result of collaboration across several generations of researchers.

“To really demonstrate how vision and touch can be useful together requires building a complete robotic system, which is very difficult for a single person to do in a short amount of time,” Bronars says. “Working together with each other and with Nikhil (Chavan-Dafle PhD ’20) and Yifan (Hou PhD ’21 CMU) and across many generations and labs has really allowed us to build an end-to-end system.”
