UK-based self-driving car startup Wayve started life as a software platform loaded into a small electric “car,” a Renault Twizy. Festooned with cameras, the company's co-founders, then PhD candidates Alex Kendall and Amar Shah, tuned the deep learning algorithms powering the car's self-driving system until it could drive itself, unmanned, through a medieval city.
No fancy LiDAR cameras or radar were needed. They suddenly realized they were onto something.
Fast forward to today: Wayve, now an AI model company, has raised $1.05 billion in a Series C funding round led by SoftBank, with NVIDIA and Microsoft participating. That makes it the largest AI fundraise ever in the UK and puts it among the top 20 AI fundraises globally. Even Meta's AI chief, Yann LeCun, invested in the company in its early days.
Wayve now plans to sell its self-driving models to various automotive OEMs and manufacturers of new autonomous robots.
Wayve CEO Alex Kendall. Image credit: Wayve
What follows is an exclusive interview with Wayve co-founder and CEO Alex Kendall about how the company trains its models, the new funding, its licensing plans, and the broader self-driving market.
(Note: The following interview has been edited for length and clarity)
TechCrunch: What tipped the balance to achieve this level of funding?
Kendall: Seven years ago, we founded the company to build embodied AI. We have worked hard to build the technology […] What happened last year is that everything really started to move. […] All the elements you need to make this product dream a reality [came together], especially the first opportunity to deploy embodied AI at scale.
Today, mass-produced cars are coming out with GPUs, surround cameras, radar and, of course, the appetite to deploy AI and accelerate from driver assistance to autonomous driving. This funding is therefore a validation of our technological approach, and it gives us the capital to turn this technology into a product and bring it to market.
You'll soon be able to buy a new car powered by Wayve's AI. […] And this will lead to all kinds of embodied AI, not just in cars but in other forms of robotics as well. What we want to accomplish here is to take AI far beyond where it is today with language models and chatbots: to truly realize a future where we can trust intelligent machines, delegate tasks to them and, of course, have them improve our lives. Autonomous driving will be the first example.
TC: How have you been training self-driving models over the past few years?
Kendall: We partnered with Asda and Ocado to collect data and trial autonomy. That was a great way for us to get this technology off the ground, and it will continue to be a really important part of our growth story.
TC: What are your plans for licensing AI to OEMs and automakers? What are the benefits?
Kendall: We want every automaker in the world to be able to work with our AI, of course. More importantly, working with different vehicles and markets captures diverse data, which lets us build the most intelligent and capable embodied AI.
TC: Which car manufacturers are you selling to? Who have you landed?
Kendall: We work with some of the top 10 car manufacturers in the world. I'm not ready to announce who they are today.
TC: What was it about your technology that won over SoftBank and the other investors? Was it that it's effectively platform-agnostic and every car now has cameras all around it?
Kendall: That's about right. SoftBank has publicly commented on its focus on AI and robotics, and autonomous driving [tech] sits exactly at that intersection. What we've seen so far with the AV 1.0 approach is putting infrastructure, HD maps and so on into a very constrained setting to demonstrate the technology. But there's a long way to go from there to something that can be deployed at scale.
SoftBank and Wayve are completely aligned in our vision of achieving autonomy at scale by deploying this software across millions of diverse vehicles around the world. Beyond the business, it gives us access to a wide range of data from around the world to train and validate the safety case, so that we can drive 'hands off, eyes off' around the world.
This architecture runs with onboard intelligence, making its own decisions. It's trained on video as well as language, so general-purpose reasoning and knowledge are built into the system. That lets it handle the long-tail, unexpected events you encounter on the road. That's the path we're taking.
TC: Where do you see yourselves at this point, compared to what's already been rolled out in the world?
Kendall: There's been a lot of really exciting proof of the technology, but autonomous driving has largely plateaued over the last three years, and there's been a lot of consolidation in the AV industry. What this technology, what AI, represents is a complete game-changer. It lets you drive without the cost and expense of LiDAR and HD maps, because the onboard intelligence does the work. It can handle the complexity of unclear lane markings, cyclists and pedestrians, and it's intelligent enough to predict how others will move, so it can negotiate and maneuver through even the tightest spaces. That makes it possible to bring the technology into cities without causing anxiety or road rage, and to drive in a way that fits the local driving culture.
TC: Way back, you ran your first experiments by fitting cameras to a Renault Twizy. What happens now that car manufacturers are putting lots of cameras around their cars?
Kendall: Automakers are already building vehicles that make this possible. I won't name brands, but pick your favorite; especially at the luxury end, they come with surround cameras, surround radar and an onboard GPU, all of which makes this possible. And now that we have software-defined vehicles, we can do over-the-air updates and pull data back from the vehicle.
TC: What was your “strategy”?
Kendall: We set the company up with all the pillars we needed to build this. Our strategy was AI, talent, data and compute. On the talent side, we've built a brand at the intersection of AI and robotics, and we've been fortunate to bring together some of the best people from around the world to tackle this problem. Microsoft is a long-time partner of ours, and the amount of GPU compute they provide in Azure lets us train models at a scale that's never been seen before: truly enormous embodied AI models that can actually produce the safe and intelligent behavior this problem requires. And of course NVIDIA; their chips are best-in-class on the market today and enable us to deploy this technology.
TC: Is all the training data you get from the brands you work with mixed into the model?
Kendall: That's right. That's exactly the model we've been able to prove out. No single automaker's data produces a sufficiently safe model on its own. An AI trained on data from multiple automakers is safer and performs better than one trained on data from just one, and it can go into more markets.
TC: So you're effectively the owner of probably the largest amount of driving training data in the world?
Kendall: That's certainly our ambition. But we want this AI to go beyond driving and become true embodied AI. This is the first vision-language-action model that can drive a car. It's trained not just on driving data but also on internet-scale text and other sources. For example, we train the model on the UK government's PDF of the Highway Code. It has access to all kinds of data sources.
TC: Does that mean it's not just cars, but robots as well?
Kendall: That's right. We're building our embodied AI foundation model as a general-purpose system trained on highly diverse data. Think of household robots: the data [from that] is diverse. It's not a constrained environment like manufacturing.
TC: How do you plan to expand the company?
Kendall: We're continuing to grow our AI, engineering and product teams here [in the U.K.]. We've just started a small team in Silicon Valley, and also in Vancouver. We're not trying to grow the company for growth's sake; we're aiming for disciplined, purposeful growth. Headquarters will remain in the UK.
TC: Where do you think the centers of talent and innovation in AI are located in Europe?
Kendall: It would be pretty hard to name anywhere other than London. I think London is by far the dominant hub in Europe. With bases in London, Silicon Valley and Vancouver, we're probably in the top five or six hubs in the world. London has been a great place for us so far. We originally grew out of academic innovation at Cambridge. The next chapter is a less-trodden path, but from where we stand right now, it's a great ecosystem [in the U.K.].
There's a lot of good stuff going on when it comes to business, law and tax. On the regulatory side, we've been working with the government for the past five years on new legislation for autonomous driving in the UK. The bill has passed the House of Lords and is nearly through the House of Commons, so it should become law soon. The government has worked with us on this to make all of it legal in the UK. […] We've worked really hard for that and have had more than 15 ministers visit. It's been a really great partnership so far, and we definitely feel the government's support.
TC: Do you have any comments on the EU approach to autonomous driving?
Kendall: Autonomous driving is not part of the AI Act. It's a specific industry and needs to be regulated as such, by subject-matter experts, rather than through a sweeping catch-all, so I'm glad about that. A catch-all is not the fastest way to innovate in a specific field. I think we can do this responsibly by working with the motor vehicle regulators who understand the problem space. Sector-specific regulation is really important, and I'm glad the EU has taken that approach to autonomous driving.