Algorithms allow robots to avoid obstacles and run in the wild

A team of researchers from the University of California, San Diego has developed a new system of algorithms that allows four-legged robots to walk and run in natural environments. The robots can navigate difficult, complex terrain while avoiding both static and moving obstacles.

The team performed tests in which a robot, guided by the system, maneuvered autonomously and quickly over sandy surfaces, gravel, grass, and bumpy dirt hills covered in branches and fallen leaves. At the same time, it avoided hitting poles, trees, shrubs, rocks, benches, and people. The robot also demonstrated that it can navigate a busy office space without running into obstacles.

Building efficient legged robots

The new system brings researchers closer than ever to building efficient legged robots for search and rescue missions, or robots that collect information in spaces that are hard to reach or dangerous for humans.

The work will be presented at the 2022 International Conference on Intelligent Robots and Systems (IROS), held from October 23 to 27 in Kyoto, Japan.

The system gives the robot more versatility by combining the robot’s sense of sight with proprioception, another sensing modality that covers the robot’s sense of movement, direction, speed, location, and touch.

Most current approaches to training legged robots to walk and navigate rely on either proprioception or vision, but not both at the same time.

Combining proprioception with computer vision

Xiaolong Wang is a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.

“In one case, it’s like training a blind robot to walk just by touching and feeling the ground. And in the other, the robot plans its leg movements based on sight alone. It’s not learning two things at once,” Wang said. “In our work, we combine proprioception with computer vision to allow a legged robot to move efficiently and smoothly – while avoiding obstacles – in a variety of challenging environments, not just well-defined ones.”

The system developed by the team relies on a special set of algorithms to merge real-time image data, captured by a depth camera on the robot’s head, with data from sensors on the robot’s legs.
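The article does not show the team's actual code, but the fusion step it describes can be sketched in a minimal, illustrative way: flatten the depth image, flatten the leg-sensor readings, and concatenate them into one observation vector for the control policy. All names and dimensions below are assumptions for illustration only.

```python
import numpy as np

def fuse_observations(depth_image, leg_sensors):
    """Combine a depth image with proprioceptive leg-sensor data
    into a single observation vector for the control policy."""
    # Flatten the depth image (e.g. a 64x64 frame -> 4096 values).
    visual = depth_image.astype(np.float32).flatten()
    # Proprioception: joint angles, joint velocities, body orientation, etc.
    proprio = np.asarray(leg_sensors, dtype=np.float32)
    # Concatenate both modalities into one input for the policy network.
    return np.concatenate([visual, proprio])

# Example: a 64x64 depth frame plus 24 joint readings -> 4120-dim observation.
obs = fuse_observations(np.zeros((64, 64)), np.zeros(24))
print(obs.shape)  # (4120,)
```

In practice, systems like this typically pass the image through a small convolutional encoder rather than feeding raw pixels, but the principle is the same: both modalities end up in one input to the policy.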

However, Wang said it was a complex task.

“The problem is that during real-world operation there is sometimes a slight delay in receiving images from the camera, so data from the two different detection modalities does not always arrive at the same time,” he explained.

The team addressed this challenge by simulating the mismatch: during training, the delays applied to the two input streams were randomized. The researchers call this technique multimodal delay randomization, and they used the delayed and randomized inputs to train a reinforcement learning policy. The approach allowed the robot to make decisions quickly as it navigated, and to anticipate changes in its environment. These abilities let the robot move and maneuver around obstacles faster on different types of terrain, all without the help of a human operator.
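The delay-randomization idea can be illustrated with a short sketch: keep a buffer of recent camera frames and, at each training step, hand the policy a randomly stale frame while proprioception stays current. This is a simplified illustration of the general technique, not the team's implementation; the class name and parameters are invented for this example.

```python
import random
from collections import deque

class DelayRandomizedCamera:
    """During training, serve the policy a randomly delayed camera frame
    while proprioception stays current, so the policy learns to tolerate
    the sensor-timing mismatch seen on the real robot."""

    def __init__(self, max_delay_steps=3, seed=0):
        # Buffer holds the current frame plus up to max_delay_steps old ones.
        self.buffer = deque(maxlen=max_delay_steps + 1)
        self.rng = random.Random(seed)

    def observe(self, new_frame):
        self.buffer.append(new_frame)
        # Sample how many steps stale the returned frame is (0 = fresh).
        delay = self.rng.randint(0, len(self.buffer) - 1)
        return self.buffer[-1 - delay]

camera = DelayRandomizedCamera(max_delay_steps=3)
for t in range(5):
    frame = camera.observe(f"frame_{t}")
    # The returned frame is at most 3 simulation steps old.
```

Because the policy never knows exactly how stale the visual input is, it learns behaviors that remain robust when the real camera lags behind the leg sensors.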

The team will now look to make legged robots more versatile so they can operate in even more complex terrain.

“Right now, we can train a robot to do simple movements like walking, running, and avoiding obstacles,” Wang said. “Our next goals are to enable a robot to climb and descend stairs, walk on stones, change direction and jump over obstacles.”

Sharon D. Cole