Hello everyone and welcome to TechCrunch's regular AI newsletter. If you'd like to receive this newsletter in your inbox every Wednesday, sign up here.
This week in AI, two startups building code-generating and code-suggesting tools, Magic and Codeium, raised a combined total of nearly $500 million. The rounds are big even by AI sector standards, considering Magic has yet to release a product or generate revenue.
So why are investors excited? Because coding isn't an easy business, nor a cheap one, and there's demand from both enterprises and individual developers for ways to streamline some of its more difficult processes.
One study found that the average developer spends about 20% of their workweek maintaining existing code instead of writing new code, and another estimated that excessive code maintenance (including dealing with technical debt and fixing poorly performing code) costs companies $85 billion a year in lost opportunity.
Many developers and companies believe that AI tools can help here. And consultants agree, for what it's worth: In a 2023 report, McKinsey analysts wrote that AI coding tools can help developers write new code in half the time and optimize existing code in roughly two-thirds the time.
Coding AI is no panacea, though. The McKinsey report also found that more complex workloads, such as those requiring familiarity with a specific programming framework, don't necessarily benefit from AI. In fact, the report's co-authors found that novice developers took longer to complete some tasks with AI tools than without them.
“Participants' feedback showed that developers actively iterated with the tools to achieve this [high] quality,” the co-authors write. “Maintaining code quality requires developers to understand the attributes that constitute quality code and to prompt the tools for the right outputs.”
AI coding tools also have unresolved security and IP-related issues. Some analyses show that, over the past few years, these tools have pushed increasing amounts of erroneous code into codebases, while code generators trained on copyrighted code have been observed repeating that code when prompted in certain ways, creating liability risks for the developers who use them.
But that hasn’t dampened developers’ – or their employers’ – enthusiasm for AI coding.
In a 2024 GitHub survey, the vast majority of developers (over 97%) said they have adopted AI tools in some form, and the same survey found that between 59% and 88% of companies encourage or allow the use of assistive programming tools.
So it's not all that surprising that the AI coding tools market could be worth around $27 billion by 2032 (according to Polaris Research), especially given Gartner's prediction that 75% of enterprise software developers will use AI coding assistants by 2028.
The market is already booming: Generative AI coding startups Cognition, Poolside, and Anysphere have closed huge funding rounds in the past year, and GitHub's AI coding tool Copilot has more than 1.8 million paying users. The productivity gains these tools bring have been enough to convince investors and customers to overlook their flaws. But it remains to be seen whether this trend will continue, and for how long.
News
“Emotion AI” attracts investment: Julie writes about how some VCs and companies are attracted to “emotion AI,” the more sophisticated sibling of sentiment analysis, and what problems this could cause.
Why Domestic Robots Still Aren't Good: Brian explores why so many attempts at domestic robotics have failed spectacularly. He says it comes down to price, functionality, and effectiveness.
Amazon Hires Covariant Founder: Speaking of robots, Amazon last week hired the founder of robotics startup Covariant and “about a quarter” of its employees. It also signed a non-exclusive license to use Covariant's AI robotics models.
NightCafe, an OG image generator: I covered NightCafe, one of the original image generators and a marketplace for AI-generated content. Despite moderation challenges, NightCafe is still going strong.
Midjourney Enters Hardware: NightCafe competitor Midjourney is entering hardware. The company made the announcement in a post on X, noting that its new hardware team will be based in San Francisco.
SB 1047 Passes: The California Assembly has passed AI bill SB 1047. Max writes about why some people hope the Governor doesn't sign it.
Google Rolls Out Election Countermeasures: Google is preparing for the US presidential election by extending restrictions to more of its generative AI apps and services. Under those restrictions, most of the company's AI products will decline to respond to election-related topics.
Apple and Nvidia May Invest in OpenAI: Nvidia and Apple are reportedly in talks to invest in ChatGPT maker OpenAI's next funding round, which could value the company at $100 billion.
Research Paper of the Week
If you have AI, you don't need a game engine.
Researchers from Tel Aviv University and Google's AI research and development division DeepMind last week previewed GameNGen, an AI system that can simulate the game Doom at up to 20 frames per second. Trained on vast amounts of Doom gameplay footage, the model effectively predicts the next “game state” as a player “controls” a character in the simulation, generating the game in real time.
An AI-generated Doom-like level. Image credit: Google
GameNGen isn't the first such model: OpenAI's Sora can simulate games including Minecraft, and a group of university researchers announced an AI that simulates Atari games earlier this year. (Other models in this direction range from World Models to GameGAN to Google's own Genie.)
But in terms of performance, GameNGen is one of the most impressive attempts at game simulation to date. The model is not without significant limitations, such as graphical glitches and the inability to “remember” more than 3 seconds of gameplay (meaning you can't create a functional game with GameNGen), but it could be the first step towards an entirely new kind of game: procedurally generated games on steroids.
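To make the idea concrete, here's a minimal, purely illustrative sketch of the loop such a neural “game engine” runs: instead of a traditional engine rendering each frame, a model generates the next frame conditioned on a short window of recent frames plus the player's latest input. All names and parameters below are hypothetical placeholders, not GameNGen's actual code.

```python
# Hypothetical sketch (not GameNGen's code): a neural net, rather than a
# traditional game engine, produces each frame conditioned on a short window
# of recent frames and the player's latest input.
import torch


@torch.no_grad()
def simulate_game(frame_model, first_frame, get_player_action,
                  steps=600, context_len=60):
    """Run a neural 'game loop': predict each next frame from limited history.

    A ~60-frame window at ~20 fps is roughly 3 seconds of context, which is
    why such models quickly 'forget' earlier gameplay.
    """
    frames, actions = [first_frame], []
    for _ in range(steps):
        action = get_player_action()                       # e.g. current keyboard state
        ctx_frames = frames[-context_len:]                  # limited "memory" window
        ctx_actions = (actions + [action])[-context_len:]
        next_frame = frame_model(ctx_frames, ctx_actions)   # predicted next "game state"
        frames.append(next_frame)
        actions.append(action)
    return frames                                            # rendering/display happens elsewhere
```

GameNGen's frame generator is reportedly diffusion-based, but the point of the sketch is the loop structure: the “game” exists only as a stream of predicted frames, with no underlying engine or level geometry.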
Model of the Week
As my colleague Devin Coldewey has written before, AI is taking over the field of weather forecasting, from simple predictions like “how long will this rain last” to 10-day outlooks and even century-scale forecasts.
One of the newest models, Aurora, is the product of Microsoft's AI research organization. Trained on a variety of weather and climate datasets, Microsoft claims Aurora can be fine-tuned to specific forecasting tasks with relatively little data.
Image credit: Microsoft
“Aurora is a machine learning model that can predict atmospheric variables, such as temperature,” Microsoft explains on the model's GitHub page, “and offers three dedicated versions: medium-resolution weather forecasts, high-resolution weather forecasts, and air pollution forecasts.”
Aurora's performance appears to be quite good compared with similar atmospheric models (it can produce a five-day global air pollution forecast or a 10-day high-resolution weather forecast in under a minute). But it's not immune to the hallucinatory tendencies of other AI models: Because Aurora can make mistakes, Microsoft warns that “individuals and businesses shouldn't use it to plan their operations.”
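Microsoft's claim that Aurora can be fine-tuned to specific forecasting tasks with relatively little data follows the usual foundation-model recipe: freeze most of the pretrained network and train a small task-specific head on your own dataset. Below is a generic, hypothetical PyTorch sketch of that pattern; it does not use Aurora's actual package or API, and every name in it is illustrative.

```python
# Hypothetical fine-tuning sketch (generic PyTorch, not Aurora's documented API):
# adapt a pretrained atmospheric foundation model to a narrower task, e.g.
# station-level temperature forecasts, using a small dataset.
import torch
from torch.utils.data import DataLoader


def finetune(pretrained_model, head, train_dataset, epochs=3, lr=1e-4):
    """Freeze the pretrained backbone and train only a small task head,
    the typical way to adapt a foundation model with relatively little data."""
    for p in pretrained_model.parameters():
        p.requires_grad = False                        # keep backbone weights fixed
    optimizer = torch.optim.AdamW(head.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    loader = DataLoader(train_dataset, batch_size=8, shuffle=True)

    for _ in range(epochs):
        for inputs, targets in loader:
            features = pretrained_model(inputs)         # shared atmospheric features
            preds = head(features)                      # task-specific forecast
            loss = loss_fn(preds, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return head
```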
Grab Bag
Last week, Inc. reported that Scale AI, an AI data labeling startup, had laid off many of its annotators who were responsible for labeling training datasets used to develop AI models.
As of this writing, there has been no official announcement, but a former employee told Inc. that hundreds of employees had been laid off (a claim Scale AI disputes).
Most of the annotators who work for Scale AI are not directly employed by the company, but rather by Scale subsidiaries or third-party companies, and job security is low: Labelers can go without work for long periods of time, or be unceremoniously booted off the Scale platform, as happened recently to contractors in Thailand, Vietnam, Poland, and Pakistan.
Regarding last week's layoffs, a Scale spokesperson told TechCrunch that the company hires contract workers through a firm called HireArt. “These individuals [i.e., those who lost their jobs] are HireArt employees who received severance and COBRA benefits from HireArt through the end of the month. Fewer than 65 were laid off last week. We've built up this contract workforce and right-sized it as our operating model has evolved over the last nine months, but fewer than 500 people have been laid off in the U.S.”
It's a bit difficult to parse exactly what Scale AI means by this carefully worded statement, but we're looking into it. If you're a former Scale AI employee or a recently let-go contractor, please get in touch by whatever means is convenient for you.