Hello everyone, and welcome to TechCrunch's first-ever AI newsletter. It's a thrill to finally type these words; this has been a long time in the making, and I'm excited to share it with you all.
With the launch of TC's AI newsletter, we will be discontinuing our semi-regular “This Week in AI” column, formerly known as Perceptron. However, all of our “This Week in AI” analyses, including spotlights on noteworthy new AI models, can still be found here.
This week in AI, OpenAI is running into trouble again.
A group of former OpenAI employees spoke to The New York Times' Kevin Roose about major safety failings within the organization. They, like others who have left OpenAI in recent months, claim that the company hasn't done enough to prevent its AI systems from becoming potentially dangerous, and they accuse OpenAI of employing heavy-handed tactics to stop employees from raising the alarm.
The group published an open letter on Tuesday calling on major AI companies, including OpenAI, to improve transparency and strengthen whistleblower protections. “Without effective government oversight of these companies, current and former employees are the few people who can hold companies accountable to the public,” the letter said.
Call me pessimistic, but I expect the former employees' complaints will be ignored. It's hard to imagine a scenario in which an AI company would not only agree to “support a culture of open criticism,” as the signatories recommend, but also choose not to enforce non-disparagement clauses or retaliate against current employees who choose to speak up.
Consider also that OpenAI's safety committee, created recently in response to initial criticism of the company's safety practices, is made up entirely of company insiders, including CEO Sam Altman, who previously claimed to know nothing about OpenAI's restrictive non-disparagement agreements yet himself signed the founding documents that enacted them.
Of course, things could change at OpenAI tomorrow, but I wouldn't count on it, and even if they did, it would be hard to trust the company's word.
News
AI Apocalypse: OpenAI's AI-powered chatbot platform ChatGPT went down this morning, and Anthropic's Claude, Google's Gemini and Perplexity suffered outages at around the same time. All of the services have since been restored, but the cause of the downtime remains unknown.
OpenAI Explores Fusion: According to the Wall Street Journal, OpenAI is in talks with fusion startup Helion Energy about a deal in which OpenAI would buy large amounts of electricity from Helion to power its data centers. Altman has a $375 million stake in Helion and sits on the company's board of directors, but he has reportedly recused himself from the deal talks.
The Cost of Training Data: TechCrunch examines expensive data licensing agreements that are becoming commonplace in the AI industry and threaten to make AI research unsustainable for smaller organizations and academic institutions.
Hateful Music Generator: Bad actors are misusing AI-powered music generators to create homophobic, racist, and propagandistic songs, and are publishing guides to teach others how to do the same.
Cohere Funding: Enterprise generative AI startup Cohere has raised $450 million in new funding from Nvidia, Salesforce Ventures, Cisco and others, bringing Cohere's valuation to $5 billion, according to Reuters. TechCrunch reports that the round also included participation from Oracle and Thomvest Ventures (both returning investors) and remains open, according to people familiar with the matter.
Research Paper of the Week
In a 2023 research paper titled “Let's Verify Step by Step,” recently highlighted on OpenAI's official blog, OpenAI scientists claim to have fine-tuned the startup's general-purpose generative AI model, GPT-4, to achieve better-than-expected performance on mathematical problems. The approach could make generative models less likely to go off the rails, the paper's co-authors say, but they also point out some caveats.
In the paper, the co-authors detail how they trained a reward model to detect hallucinations, or instances when GPT-4 got its facts wrong or made an error while solving a math problem. (A reward model is a specialized model that evaluates the output of another AI model, in this case math-related output from GPT-4.) The reward model gave GPT-4 a “reward” each time it got a step of a math problem right, an approach the researchers call “process supervision.”
The researchers say that process supervision has improved GPT-4's accuracy on math problems, at least in benchmark tests, compared with previous “reward” model approaches. They acknowledge, however, that it's not perfect: GPT-4 still makes mistakes in problem steps. And it's unclear how the form of process supervision the researchers investigated generalizes beyond the domain of mathematics.
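To make the distinction concrete, here is a minimal, purely illustrative Python sketch of the idea, not OpenAI's actual implementation: a hypothetical step_reward_model stands in for the trained reward model, process supervision scores every intermediate step of a solution, and outcome supervision (the baseline it's usually contrasted with) scores only the final answer.

```python
# Illustrative sketch of process supervision vs. outcome supervision.
# The "reward model" here is a stand-in heuristic, not OpenAI's trained model.

from typing import List


def step_reward_model(step: str) -> float:
    """Hypothetical per-step reward model.

    A real reward model is a trained network that scores whether a
    reasoning step is correct; here we fake it with a keyword check
    purely for illustration.
    """
    return 0.0 if "error" in step.lower() else 1.0


def process_supervision_score(steps: List[str]) -> float:
    """Reward every correct intermediate step (process supervision)."""
    if not steps:
        return 0.0
    rewards = [step_reward_model(step) for step in steps]
    return sum(rewards) / len(rewards)


def outcome_supervision_score(final_answer_correct: bool) -> float:
    """Reward only the final answer (outcome supervision), for contrast."""
    return 1.0 if final_answer_correct else 0.0


if __name__ == "__main__":
    solution = [
        "Let x be the number of apples: x + 3 = 10",
        "Subtract 3 from both sides: x = 7",
        "error: conclude x = 8",  # a flawed step the per-step reward catches
    ]
    print("process score:", process_supervision_score(solution))      # ~0.67
    print("outcome score:", outcome_supervision_score(False))         # 0.0
```

The contrast is the point: process supervision gives the model feedback at every step of its reasoning rather than only at the end, which is why the co-authors argue it is better at keeping chains of reasoning on track.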
Model of the Week
Weather forecasting may not feel like a science (at least when you get rained on, like I just did), because it's all about probability, not certainty. And what better way to calculate probability than with a probabilistic model? We've already seen AI applied to weather forecasting on timescales ranging from hours to centuries, and now Microsoft is joining the fun. Its new Aurora model advances this fast-moving corner of the AI world, providing global forecasts at a resolution of about 0.1° (think 10 km by 10 km).
Image credit: Microsoft
Trained on over a million hours of weather and climate simulations (not real weather? Hmm…) and fine-tuned on a number of desired tasks, Aurora outperforms traditional numerical forecasting systems by several orders of magnitude. Even more impressively, it beats Google DeepMind's GraphCast at its own game (though Microsoft picked the field), providing more accurate estimates of weather conditions on the one- to five-day scale.
Of course, companies like Google and Microsoft have a horse in this race, vying for users' online attention by offering the most personalized web and search experience, and accurate, efficient, first-party weather forecasts will be a key part of that playbook, at least until we stop going outside.
Grab Bag
In a think piece published in Palladium last month, Avital Balwit, chief of staff at AI startup Anthropic, writes that rapid advances in generative AI mean the next three years may be the last that she and many other knowledge workers have to work. This is not a reason for fear, she says, but a cause for relief, because it could “[lead to] a world where people's material needs are met but they do not have to work.”
“A well-known AI researcher once told me that he is preparing for [this inflection point] by taking up activities he is not particularly good at, like jiu-jitsu and surfing, and savoring the doing even without excellence,” she writes. “This is how we can prepare for a future in which we will have to do things out of joy rather than necessity, where we will no longer be the best at them, but will still have to choose how to fill our days.”
That's certainly an optimistic view, but I don't agree with it.
If generative AI were to replace most knowledge workers within three years (which seems unrealistic given AI's many unsolved technical problems), economic collapse would be quite likely. Knowledge workers make up the majority of the workforce, tend to be high-income earners, and therefore big spenders. They drive the wheels of capitalism forward.
Balwit points to universal basic income and other large-scale social safety net programs, but it's hard to believe that a country like the United States, which can't even get basic AI laws in place at the federal level, will institute a universal basic income scheme anytime soon.
With any luck, I'll be wrong.