When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor Emeritus of Robotics at MIT, Brooks has co-founded three companies: Rethink Robotics, iRobot, and his current venture, Robust.ai. He also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade, starting in 1997.
He also likes to make predictions about the future of AI, and he keeps a scorecard on his blog tracking how well those predictions are panning out.
He knows what he's talking about, and he thinks it may be time to put the brakes on the hype around generative AI. Brooks believes it's an impressive technology, but perhaps not as capable as many suggest. “I'm not saying LLMs aren't important, but we have to be careful [with] how we evaluate them,” he told TechCrunch.
The problem with generative AI, he said, is that while it can perform a given set of tasks perfectly, it can't do everything a human can, and humans tend to overestimate its capabilities. “When humans see an AI system perform a task, they quickly generalize that to similar ones and estimate the AI system's capabilities — not just its performance on that task, but its capabilities related to that task,” Brooks said. “And they're usually wildly overoptimistic, because they're using a model of human performance on the task.”
The problem, he added, is that generative AI is not human, or even human-like, and it's a mistake to attribute human capabilities to it. As a result, he said, people believe generative AI is so capable that they want to use it even for applications that make no sense.
Brooks points to his latest company, Robust.ai, which builds warehouse robots, as an example: someone recently suggested to him that it would be cool and efficient to build an LLM interface so people could tell the warehouse robots where to go. But in Brooks' estimation, this is not a reasonable use case for generative AI and would actually slow things down. It's much simpler to hook the robots up to a data stream from the warehouse management software.
“If you have 10,000 orders that need to be shipped within two hours, you need to optimize for that. Language doesn't help, it just slows you down,” he said. “We have big data processing and big AI optimization techniques and planning. That's how you get orders to be fulfilled quickly.”
Another lesson Brooks has learned about robots and AI is not to overreach: solve solvable problems where robots can be integrated easily.
“You need to automate in places that have already been cleaned up. The example from my company is that we're doing really well in warehouses, which are actually quite constrained. The lighting is the same across these big buildings. There isn't stuff strewn on the floor, because people pushing carts would bump into it. There aren't plastic bags floating around. And it's largely not in the interest of the people who work there to be malicious toward the robots,” he said.
Brooks explains that the idea is for robots and humans to work together, so rather than building human-like robots, the company designed its robots around the practical realities of warehouse operations. In this case, the robot looks like a shopping cart with a handlebar.
“The form factor that we're using is not a walking humanoid. I've built and delivered more humanoids than anyone, and these look like shopping carts,” he said. “They have handlebars so that if the robot has a problem, a person can grab the handlebars and steer it however they want.”
Over the years, Brooks has learned the importance of making technology accessible and purpose-built: “I always try to make technology easy for people to understand so it can be deployed at scale. I also always look at the business case; return on investment is really important.”
Still, Brooks says we need to accept that when it comes to AI, there will always be hard-to-find edge cases: “If we're not careful about how we deploy AI systems, there will always be a long tail of edge cases that take decades to discover and fix. And paradoxically, all of those fixes are themselves AI complete.”
Brooks added that there is a false belief, driven largely by Moore's Law, that technology always grows exponentially — if ChatGPT 4 is this good, imagine what ChatGPT 5, 6, and 7 will be like. He believes that logic is flawed, because technology doesn't always scale exponentially.
He uses the example of the iPod, whose storage capacity doubled from one model year to the next for a while, going from 10 GB up to 160 GB. Had that trend continued, he points out, by 2017 we would have had iPods with 160 TB of storage. But of course that didn't happen: models sold in 2017 came with 256 GB or 160 GB, because, as he notes, nobody actually needed more storage than that.
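The extrapolation Brooks is mocking is simple compound doubling. As an illustration (the 2007 start year and one-doubling-per-year rate are assumptions for the sake of the arithmetic, not figures from Brooks), ten annual doublings carry 160 GB to 160 TB:

```python
def extrapolate_gb(start_gb: int, start_year: int, end_year: int) -> int:
    """Capacity in GB, assuming one doubling per model year."""
    return start_gb * 2 ** (end_year - start_year)

# 160 GB in 2007, naively doubled every year until 2017:
capacity_2017 = extrapolate_gb(160, 2007, 2017)
print(capacity_2017)          # 163840 GB
print(capacity_2017 // 1024)  # i.e. 160 TB
```

Ten doublings is a factor of 1,024, which is exactly why the naive trend line lands three orders of magnitude above what the market actually shipped.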
Brooks acknowledges that LLMs could potentially be used to perform certain tasks in the world of domestic robotics, especially with an aging population and growing caregiver shortages, but he says that could come with its own challenges.
“People say, 'If we have large language models, robots will be able to do things they couldn't do before,' but that's not where the problem is. The problems of getting robots to do things come down to control theory and all kinds of hardcore mathematical optimization,” he said.
Brooks explains that this could eventually lead to robots with language interfaces that are convenient for people in care settings: “It's not useful to tell each individual robot in a warehouse to go and pick up one thing per order, but in home elderly care it could be useful because the person can talk to the robot,” he said.