At Google, robots go to school and learn thanks to AI algorithms


MOUNTAIN VIEW, Calif. — Researchers at Google’s lab recently asked a robot to build a hamburger out of various plastic toy ingredients.

The mechanical arm knew enough to add ketchup after the meat and before the lettuce, but it thought the right way to do so was to put the entire bottle inside the burger.

While this robot won’t be working as a line cook anytime soon, it’s representative of a bigger breakthrough announced by Google engineers on Tuesday. Using newly developed artificial intelligence software known as large language models, researchers say they have been able to design robots that can help humans with a wider range of everyday tasks.

Instead of being given a long list of instructions directing each movement one by one, robots can now respond to complete requests in plain language, more like a human would.

During a demonstration last week, a researcher said to a robot, “I’m hungry, can you bring me a snack?” The robot then searched a cafeteria, opened a drawer, found a bag of chips and brought it to the human.

This is the first time that language models have been embedded in robots, say Google executives and researchers.

“It’s fundamentally a different paradigm,” says Google researcher Brian Ichter, one of the authors of a new paper published Tuesday outlining the company’s progress.

Robots are already commonplace. Millions of them work in factories around the world, but they follow specific instructions and usually focus on one or two tasks, such as moving a product through the assembly line or welding two pieces of metal together. The race to build a robot that can perform a range of daily tasks and learn on the job is much more complex. Tech companies big and small have worked for years to build such versatile robots.


Language models work by taking huge amounts of text downloaded from the internet and using it to train artificial intelligence software to guess what kinds of answers might come after certain questions or comments. The models have become so good at predicting plausible answers that engaging with one often feels like having a conversation with a knowledgeable human. Google and other companies, including OpenAI and Microsoft, have devoted resources to building better models and training them on ever-larger sets of text, in multiple languages.
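The prediction idea the paragraph describes can be shown with a deliberately tiny sketch. This is not Google’s model — real language models use neural networks trained on billions of words — but a toy word-count version of the same principle: learn from text which word tends to follow which, then use those statistics to guess the next word.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (not a real language model):
# count which word follows which in a tiny "training" text, then predict
# the most frequent follower. The training text here is invented.
training_text = "the robot picked up the chips and the robot brought the chips"

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1  # record that `nxt` followed `prev` once

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("picked"))  # -> "up"
```

Real models do the same kind of statistical guessing, but over long contexts rather than single words, which is what makes their answers feel conversational.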

The technology is controversial. In July, Google fired one of its engineers who said he believed the software was sentient. The consensus among AI experts is that the models are not sentient, but many worry that they exhibit bias because they were trained on huge amounts of unfiltered, human-generated text.

Some language models have been shown to produce racist or sexist output, or to be easily manipulated into spouting hate speech or lies when prompted with the right statements or questions.

In general, language models could give robots knowledge of high-level planning steps, said Carnegie Mellon assistant professor Deepak Pathak, who studies AI and robotics and was commenting on the field, not Google specifically. But these models won’t give robots all the information they need – for example, how much force to apply when opening a refrigerator. This knowledge must come from elsewhere.

“It only solves the high-level planning problem,” he said.
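Pathak’s distinction — language models supply the high-level plan, while low-level physical knowledge must come from elsewhere — can be sketched in code. This is a hypothetical illustration, not Google’s actual system: the `plan_from_llm` function stands in for a language model, and `ROBOT_SKILLS` stands in for the robot’s separately trained physical capabilities.

```python
# Hedged sketch of "LLM plans, robot skills execute" (hypothetical, not
# Google's code). The language model proposes high-level steps; each step
# only works if it maps to a low-level skill the robot actually has.
ROBOT_SKILLS = {
    "go to cafeteria": lambda: "navigating to cafeteria",
    "open drawer": lambda: "opening drawer with calibrated grip force",
    "pick up chips": lambda: "closing gripper on bag of chips",
    "bring to user": lambda: "navigating back to user",
}

def plan_from_llm(request):
    """Stand-in for a language model: returns plausible high-level steps."""
    return ["go to cafeteria", "open drawer", "pick up chips", "bring to user"]

def execute(request):
    log = []
    for step in plan_from_llm(request):
        skill = ROBOT_SKILLS.get(step)
        if skill is None:
            # The LLM can name a step, but without a trained skill the
            # robot has no idea how much force to apply or how to move.
            log.append(f"cannot perform: {step}")
        else:
            log.append(skill())
    return log

print(execute("I'm hungry, can you bring me a snack?"))
```

The division of labor is the point: the text model never learns how hard to pull a drawer handle; that knowledge lives in the skill implementations, which must be trained or engineered separately.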

Still, Google is moving forward and has now merged the language models with some of its robots. Instead of having to hard-code specific technical instructions for each task a robot can perform, researchers can simply talk to the robots in everyday language. More importantly, the new software helps robots break down complex, multi-step instructions on their own. Robots can now interpret instructions they’ve never heard before and come up with responses and actions that make sense.


Robots that can use language models could change the way manufacturing and distribution facilities work, said Zac Stewart Rogers, assistant professor of supply chain management at Colorado State University.

“A human and a robot working together are always the most productive” now, he said. “Robots can lift heavy loads. Humans can do the nuanced troubleshooting.”

If robots were able to understand complex tasks, it could mean that fulfillment centers could be smaller, with fewer humans and more robots. That could mean fewer jobs for people, though Rogers points out that typically when there’s a contraction due to automation in one area, jobs are created in other areas.

It’s also probably still a long way off. Artificial intelligence techniques such as neural networks and reinforcement learning have been used for years to train robots. They have led to breakthroughs, but progress is still slow. Google’s robots are nowhere near ready for the real world, and in interviews Google researchers and executives repeatedly said that they are running a research lab and have no immediate plans to commercialize the technology.

But it’s clear that Google and other Big Tech companies are serious about robotics. Amazon uses many robots in its warehouses, is experimenting with drone delivery and earlier this month agreed to buy iRobot, maker of the Roomba robot vacuum, for $1.7 billion. (Amazon founder Jeff Bezos owns The Washington Post.)


Tesla, which has developed self-driving features for its cars, is also working on general-purpose robots.

In 2013, Google went on a spending spree, buying several robotics companies, including Boston Dynamics, the maker of robot dogs that often go viral on social media. But the executive responsible for the program was accused of sexual misconduct and left the company soon after. In 2017, Google sold Boston Dynamics to Japanese telecommunications and tech investment giant SoftBank. The hype around ever-smarter robots designed by the most powerful tech companies died down.

As part of the language model project, Google researchers worked alongside those at Everyday Robots, a separate but wholly owned company within Google that works specifically on building robots that can perform a range of “repetitive” and “arduous” tasks. Its robots are already hard at work in various Google cafeterias, wiping down counters and sorting trash.


Sharon D. Cole