Professor Francisco Valero-Cuevas and his research team use new artificial intelligence algorithms to teach robots to move by themselves, imitating animals.
BY GRETA HARRISON
March 11, 2019
For a newborn giraffe or wildebeest, being born can be a perilous introduction to the world — predators lie in wait for an opportunity to make a meal of the herd’s weakest member. This is why many species have evolved ways for their juveniles to find their footing within minutes of birth.
It’s an astonishing evolutionary feat that has long inspired biologists and roboticists — and now a team of USC researchers believes it has become the first to create an artificial intelligence (AI)-controlled robotic limb that can trip and recover on its own without being programmed to do so.
Learning on its own
Professor Francisco Valero-Cuevas and his research team — Ali Marjaninejad, Darío Urbina-Meléndez and Brian Cohn — have developed a bio-inspired algorithm that can learn a new walking task by itself after only five minutes of unstructured play, and then adapt to other tasks without any additional programming.
Their findings, outlined in the March cover article of Nature Machine Intelligence, open exciting possibilities for understanding human movement and disability, and for creating responsive prosthetics and robots that can interact with complex, changing environments — for applications such as space exploration and search and rescue.
“Nowadays, it takes the equivalent of months or years of training for a robot to be ready to interact with the world, but we want to achieve the quick learning and adaptations seen in nature,” said senior author Valero-Cuevas, a professor at the USC Division of Biokinesiology and Physical Therapy and professor of biomedical engineering at the USC Viterbi School of Engineering.
Marjaninejad, a doctoral candidate in USC’s biomedical engineering program and the paper’s lead author, said this breakthrough is akin to the way an infant learns naturally.
The robot was first allowed to understand its environment in a process of free play, Marjaninejad explained.
“These random movements of the leg allow the robot to build an internal map of its limb and its interactions with the environment,” he added.
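The free-play phase Marjaninejad describes can be sketched in miniature: issue random motor commands, record what the limb does, and later reuse those recorded experiences to pick a command for a desired movement. The one-joint toy limb and the nearest-neighbor lookup below are illustrative assumptions for this sketch, not the algorithm from the paper:

```python
import random

def toy_limb(command):
    """Hypothetical one-joint limb: maps a motor command in [0, 1]
    to a foot position via an arbitrary nonlinear response."""
    return 2.0 * command - command ** 2

def babble(n_samples=200, seed=0):
    """Free play: issue random commands, record (command, outcome) pairs."""
    rng = random.Random(seed)
    return [(c, toy_limb(c)) for c in (rng.random() for _ in range(n_samples))]

def inverse_map(samples, target):
    """Internal map: reuse the babbled command whose recorded outcome
    came closest to the desired one."""
    return min(samples, key=lambda s: abs(s[1] - target))[0]

samples = babble()                            # a short bout of random play
command = inverse_map(samples, target=0.75)   # then ask for a movement
```

The point of the sketch is that nothing about the limb is programmed in advance: the mapping from commands to movements is discovered entirely from the robot’s own random experience.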
The paper’s authors say that, unlike most current work, their robots learn by doing, and without any prior or parallel computer simulations to guide learning.
This is particularly important because programmers can predict and code for multiple scenarios, but not for every possible scenario — thus pre-programmed robots are inevitably prone to failure, Marjaninejad explained.
“However, if you let these [new] robots learn from relevant experience, then they will eventually find a solution that will be put to use and adapted as needed,” he said. “The solution may not be perfect but will be adopted if it is good enough for the situation.”
Through discovering their body and environment, the robot limbs — designed in Valero-Cuevas’ Brain-Body Dynamics Lab — use their unique experience to develop a gait pattern that works well enough for them, producing robots with personalized movements.
“You can recognize someone coming down the hall because they have a particular footfall, right?” Valero-Cuevas said. “Our robot uses its limited experience to find a solution to a problem that then becomes its personalized habit, or ‘personality.’ We get the dainty walker, the lazy walker, the champ … you name it.”
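The “good enough” adoption described above — and the personalized “habits” it produces — can be sketched in a few lines of Python. Everything here (the gait parameters, the scoring function, the threshold) is an invented stand-in for illustration, not the authors’ actual method:

```python
import random

def gait_score(params):
    """Hypothetical gait quality (1.0 is ideal): penalize distance from
    an arbitrary 'ideal' stride length and foot lift."""
    stride, lift = params
    return 1.0 - abs(stride - 0.6) - abs(lift - 0.3)

def learn_gait(seed, good_enough=0.9, max_tries=10_000):
    """Sample random gaits and adopt the first acceptable one.
    Different seeds (i.e., different experiences) yield different gaits."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        params = (rng.random(), rng.random())
        if gait_score(params) >= good_enough:
            return params  # imperfect, but good enough: the robot's "habit"
    return None

# Two robots with different histories settle on different workable gaits.
gait_a = learn_gait(seed=1)
gait_b = learn_gait(seed=2)
```

Because each run stops at the first acceptable gait rather than searching for the best one, robots with different random experiences end up with distinct but workable walking styles — the “personalities” Valero-Cuevas describes.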
An intrepid rescuer, explorer?
The potential applications for the technology are many — particularly in assistive technology, where robotic limbs and exoskeletons that are intuitive and responsive to a user’s personal needs would be invaluable to those who have lost the use of their limbs. “Exoskeletons or assistive devices will need to naturally interpret your movements to accommodate what you need,” Valero-Cuevas said. “Our robots can learn your habits and mimic your movement style for the tasks you need in everyday life — even as you learn a new task, or grow stronger or weaker.”
The research could also have strong applications in space exploration and rescue missions, allowing the robots to do what’s necessary, without being escorted or supervised, as they venture onto a new planet or uncertain and dangerous terrain. These robots would be able to adapt to low or high gravity, loose rocks and even mud, for example.
The paper’s two additional authors, doctoral students Brian Cohn and Darío Urbina-Meléndez, weighed in on the research:
“The ability for a species to learn and adapt their movements as their bodies and environments change has been a powerful driver of evolution from the start,” said Cohn, a USC Viterbi computer science doctoral candidate. “Our work constitutes a step towards empowering robots to learn and adapt from each experience, just as animals do.”
“I envision muscle-driven robots, capable of mastering what an animal takes months to learn, in just a few minutes,” said Urbina-Meléndez, a doctoral candidate who believes in the capacity for robotics to take bold inspiration from life. “Our work combining engineering, AI, anatomy and neuroscience is a strong indication that this is possible.”
This research was funded by the National Institutes of Health, the U.S. Department of Defense and the Defense Advanced Research Projects Agency.