Hello everyone, welcome to TechCrunch's regular AI newsletter. If you'd like to have this sent to your inbox every Wednesday, sign up here.
This week was something of a swan song for the Biden administration.
On Monday, the White House announced sweeping new restrictions on exports of AI chips, which were loudly criticized by tech giants including Nvidia. (Nvidia's business would be severely affected if the restrictions go into effect as proposed.) And on Tuesday, the administration issued an executive order opening up federal land to AI data centers.
But the obvious question is: Will these moves have a lasting impact? Will Trump, who takes office on January 20th, simply reverse Biden's actions? So far, Trump has signaled no intentions either way. But he certainly has the power to undo Biden's last AI acts.
Biden's export restrictions are scheduled to go into effect after a 120-day comment period. The Trump administration will have wide discretion over how to implement this measure and whether to change it in any way.
Trump could also repeal the executive order on federal land use. His AI and crypto "czar," former PayPal COO David Sacks, recently vowed to rescind another AI-related Biden executive order, one that set standards for AI safety and security.
However, there is reason to believe the incoming administration may not shake things up too much.
In line with Biden's move to free up federal resources for data centers, Trump recently promised fast-track approvals for companies investing at least $1 billion in the United States. He also nominated Lee Zeldin, who has vowed to cut regulations he sees as burdensome to businesses, to lead the EPA.
Aspects of Biden's export controls may persist as well. Some of the regulations target China, which President Trump has publicly said he views as America's biggest rival in AI.
One wrinkle is that Israel is among the countries subject to the AI hardware trade caps. As recently as October, Trump referred to himself as Israel's "protector" and suggested he was likely to be more tolerant of Israeli military action in the region.
In any case, we will get a clearer picture later this week.
News
Image credit: Bryce Durbin / TechCrunch
ChatGPT, remind me…: Paying users of OpenAI's ChatGPT can now ask the AI assistant to schedule reminders and recurring requests. The new beta feature, called Tasks, will begin rolling out to ChatGPT Plus, Team, and Pro users worldwide this week.
Meta vs. OpenAI: Executives and researchers leading Meta's AI efforts were obsessed with beating OpenAI's GPT-4 model while developing Meta's Llama 3 family of models, according to messages unsealed by a court on Tuesday.
OpenAI's board expands: OpenAI has appointed Adebayo "Bayo" Ogunlesi, an executive at investment firm BlackRock, to its board of directors. The company's current board bears little resemblance to the OpenAI board of late 2023, which fired CEO Sam Altman only to reinstate him days later.
Blaize goes public: Blaize plans to become the first AI chip startup to go public in 2025. Founded in 2011 by former Intel engineers, the company has raised $335 million from investors including Samsung for its chips, which are designed for edge devices such as cameras and drones.
Reasoning models that think in Chinese: OpenAI's o1 AI reasoning models may "think" in languages like Chinese, French, Hindi, and Thai even when asked a question in English, and no one knows exactly why.
This week's research paper
A recent study co-authored by Dan Hendrycks, an advisor to billionaire Elon Musk's AI company xAI, suggests that many AI safety benchmarks correlate with the capabilities of AI systems. In other words, as a system's overall performance improves, its benchmark scores rise and the model appears "safer."
"Our analysis shows that many AI safety benchmarks (roughly half) often inadvertently capture latent factors closely tied to raw training compute," wrote the researchers behind the study. "Overall, it is difficult to avoid measuring upstream model capabilities in AI safety benchmarks."
In the study, the researchers propose what they describe as an empirical foundation for developing "more meaningful" safety metrics, which they hope will "[advance] the science" of safety evaluations in AI.
This week's model
Sakana AI likens the adaptability of its new AI method to an octopus. Image credit: Sakana AI
Japanese AI company Sakana AI detailed Transformer² (“Transformer Squared”), an AI system that dynamically adjusts to new tasks, in a technical paper published Tuesday.
Transformer² first analyzes a task (such as writing code) to understand its requirements. It then applies "task-specific adaptations" to optimize itself for that task.
Sakana says the methodology behind Transformer² can be applied to open models such as Meta's Llama, and “gives us a glimpse into a future where AI models are no longer static.”
Grab bag
Flowchart showing the architecture of PrAIvateSearch. Image credit: PrAIvateSearch
A small team of developers has released an open alternative to AI-powered search engines like Perplexity and OpenAI's SearchGPT.
The project, called PrAIvateSearch, is available on GitHub under the MIT license, meaning it can be used largely without restriction. It leverages openly available AI models and services, including models from Alibaba's Qwen family and the search engine DuckDuckGo.
The PrAIvateSearch team says its goal is to "implement functionality similar to SearchGPT," but in an "open source, local, and private manner." Check out the team's latest blog post for tips on getting it up and running.