Long interested in the interactions between robots and humans, researchers in the Creative Machines Lab at Columbia Engineering have been working for five years to create EVA, a new autonomous robot with a soft, expressive face that responds to match the expressions of nearby humans. The research will be presented at the ICRA conference on May 30, 2021, and the robot's blueprints are open-sourced in HardwareX. Hod Lipson, James and Sally Scapa Professor of Innovation (Mechanical Engineering) and director of the Creative Machines Lab, commented:

“The idea for EVA took shape a few years ago when my students and I began to notice that the robots in our lab were staring back at us through plastic, googly eyes.”

While this sounds simple, creating a convincing robotic face has been a formidable challenge for roboticists. For decades, robotic body parts have been made of metal or hard plastic, materials that were too stiff to flow and move the way human tissue does. Robotic hardware has been similarly crude and difficult to work with—circuits, sensors, and motors are heavy, power-intensive, and bulky.

The first phase of the project began in Lipson’s lab several years ago when undergraduate student Zanwar Faraj led a team of students in building the robot’s physical “machinery.” They constructed EVA as a disembodied bust that bears a strong resemblance to the silent but facially animated performers of the Blue Man Group. EVA can express the six basic emotions of anger, disgust, fear, joy, sadness, and surprise, as well as an array of more nuanced emotions, by using artificial “muscles” that pull on specific points on EVA’s face, mimicking the movements of the more than 42 tiny muscles attached at various points to the skin and bones of human faces.

To overcome this challenge, the team relied heavily on 3D printing to manufacture parts with complex shapes that integrated seamlessly and efficiently with EVA’s skull. After weeks of tugging cables to make EVA smile, frown, or look upset, the team noticed that EVA’s blue, disembodied face could elicit emotional responses from their lab mates.

While lifelike animatronic robots have been in use at theme parks and in movie studios for years, Lipson’s team made two technological advances. EVA uses deep learning artificial intelligence to “read” and then mirror the expressions on nearby human faces. And EVA’s ability to mimic a wide range of different human facial expressions is learned by trial and error from watching videos of itself.

The most difficult human activities to automate involve non-repetitive physical movements that take place in complicated social settings. Boyuan Chen, Lipson’s PhD student who led the software phase of the project, quickly realized that EVA’s facial movements were too complex to be governed by pre-defined sets of rules. To tackle this challenge, Chen and a second team of students created EVA’s brain using several deep learning neural networks. The robot’s brain needed to master two capabilities: first, to learn to use its own complex system of mechanical muscles to generate any particular facial expression, and second, to know which faces to make by “reading” the faces of humans. A rough sketch of this two-part design follows below.
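The article does not publish the actual architecture, but the two capabilities it describes can be illustrated with a minimal, hypothetical PyTorch sketch: one network that “reads” a human face and summarizes it as an expression vector, and a second network that turns that expression into commands for EVA’s cable-driven “muscles.” The module names, layer sizes, image resolution, and motor count below are assumptions for illustration, not the published model.

```python
# Hypothetical sketch of the two capabilities described above (not the published model).
# Assumed: 64x64 grayscale face crops, 6 expression dimensions, 12 facial motors.
import torch
import torch.nn as nn

class ExpressionReader(nn.Module):
    """Capability 2: 'read' a nearby human face and summarize it as an expression vector."""
    def __init__(self, expression_dim: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, expression_dim),
        )

    def forward(self, face_image: torch.Tensor) -> torch.Tensor:
        return self.net(face_image)

class MuscleController(nn.Module):
    """Capability 1: turn a target expression into commands for the cable 'muscles'."""
    def __init__(self, expression_dim: int = 6, num_motors: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(expression_dim, 64), nn.ReLU(),
            nn.Linear(64, num_motors), nn.Sigmoid(),  # normalized motor positions in [0, 1]
        )

    def forward(self, expression: torch.Tensor) -> torch.Tensor:
        return self.net(expression)

# Usage: mirror a nearby human face (a batch of one 64x64 grayscale crop).
reader, controller = ExpressionReader(), MuscleController()
human_face = torch.rand(1, 1, 64, 64)
motor_commands = controller(reader(human_face))
print(motor_commands.shape)  # torch.Size([1, 12])
```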

Chen and the team filmed hours of footage of EVA making a series of random faces. Then, like a human watching herself on Zoom, EVA’s internal neural networks learned to pair muscle motion with the video footage of its own face.
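One way to picture that self-modeling step is a network trained on the logged footage to infer which motor commands produced a given frame of EVA’s own face. The sketch below is an assumed illustration of that idea; the tensor shapes, dataset layout, loss, and training loop are placeholders, not details from the paper.

```python
# Hypothetical sketch of the self-modeling step: learn to map a frame of EVA's own
# face back to the motor commands that produced it, using footage of random faces.
import torch
import torch.nn as nn

# Assumed recorded data: N frames of EVA's face paired with the motor commands used.
N, num_motors = 1024, 12
self_frames = torch.rand(N, 1, 64, 64)   # frames from the self-footage (placeholder data)
motor_log = torch.rand(N, num_motors)    # motor commands logged while filming

# A small CNN that infers motor commands from a self-image (an inverse self-model).
inverse_model = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, num_motors), nn.Sigmoid(),
)

optimizer = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                       # a few passes over the logged footage
    for i in range(0, N, 64):
        frames, targets = self_frames[i:i + 64], motor_log[i:i + 64]
        predictions = inverse_model(frames)
        loss = loss_fn(predictions, targets)  # penalize mismatched muscle motion
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```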

The researchers note that EVA is a laboratory experiment, and mimicry alone is still a far cry from the complex ways in which humans communicate using facial expressions. Even so, such enabling technologies could someday have beneficial, real-world applications.

Nikoleta Yanakieva, Editor at DevStyleR International