Keeping up with an industry as rapidly changing as AI is a tall order. So until an AI can do it for you, here's a quick recap of recent stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.
By the way, TechCrunch is planning to launch an AI newsletter soon. Stay tuned. In the meantime, we're ramping up the cadence of our semi-regular AI column from twice a month (or so) to weekly, so keep an eye out for more editions to come.
This week in AI, OpenAI once again dominated the news cycle (despite Google's best efforts) with a product announcement as well as some palace intrigue. The company unveiled GPT-4o, its most capable generative model to date, and just days later effectively disbanded the team working on the problem of developing controls to keep “superintelligent” AI systems from going rogue.
As expected, the team's dissolution made plenty of headlines. Reporting, including ours, suggests that OpenAI deprioritized the team's safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team's two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.
For now, superintelligent AI is more theoretical than real; it's not clear when, or whether, the tech industry will achieve the breakthroughs needed to create AI capable of accomplishing any task a human can. But this week's reporting seems to confirm one thing: OpenAI's leadership, and particularly CEO Sam Altman, has increasingly chosen to prioritize products over safeguards.
Altman reportedly “infuriated” Sutskever by rushing the announcement of AI-powered features at OpenAI's first developer conference last November. And he's said to have criticized Helen Toner, director at Georgetown's Center for Security and Emerging Technology and a former OpenAI board member, over a paper she co-authored that cast OpenAI's approach to safety in a critical light, to the point where he tried to push her off the board.
Over the past year or so, OpenAI has let its chatbot store fill up with spam, (allegedly) scraped data from YouTube in violation of the platform's terms of service and voiced ambitions to let its AI generate depictions of porn and gore. Indeed, safety seems to be an afterthought at the company, and a growing number of OpenAI safety researchers have come to the conclusion that their work would be better supported elsewhere.
Here are some other notable AI stories from the past few days.
OpenAI + Reddit: In other OpenAI news, the company has reached an agreement with Reddit to use the social site's data for AI model training. Wall Street welcomed the deal with open arms, but Reddit users may not be so pleased.
Google's AI: Google hosted its annual I/O developer conference this week, during which it announced a slew of AI products. We've compiled them for you here, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google's Gemini chatbot app.
Anthropic hires Krieger: Mike Krieger, one of the co-founders of Instagram and, more recently, the co-founder of personalized news app Artifact (recently acquired by TechCrunch's parent company Yahoo), is joining Anthropic as the company's first chief product officer. He'll oversee both the company's consumer and enterprise efforts.
AI for kids: Anthropic announced last week that it will begin allowing developers to create apps and tools for kids built on its AI models, as long as they follow certain rules. Notably, rivals like Google don't allow their AI to be built into apps aimed at young people.
AI film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the most powerful moments in the showcase came not from AI, but from the more human elements.
More machine learning
AI safety is obviously top of mind this week given the OpenAI departures, but Google DeepMind is plugging along with a new “Frontier Safety Framework.” Essentially, it's the organization's strategy for identifying and, hopefully, heading off runaway capabilities. That doesn't have to mean AGI; it could just be a malware generator gone mad or the like.
Image credit: Google DeepMind
The framework has three steps: 1. Identify potentially harmful capabilities in a model by simulating its paths of development. 2. Evaluate the model regularly to detect when it has reached a known “critical capability level.” 3. Apply a mitigation plan to prevent exfiltration (by others or by the model itself) or problematic deployment. More detail can be found here. It may sound like an obvious series of steps, but it's important to formalize them; otherwise everyone is just winging it. That's how you get bad AI.
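To make step 2 concrete, a “critical capability level” check is basically a recurring benchmark sweep with thresholds attached. Here's a minimal, hypothetical sketch; the capability names, scores and thresholds below are invented for illustration and have nothing to do with DeepMind's actual tooling.

```python
# Hypothetical sketch of the "evaluate regularly against critical capability
# levels" step. The eval names, scoring interface and thresholds are invented.

# Scores at which a capability is considered "critical" and the pre-agreed
# mitigation plan (step 3 of the framework) should kick in.
CRITICAL_CAPABILITY_LEVELS = {
    "autonomous_replication": 0.20,
    "cyber_offense": 0.35,
    "persuasion_manipulation": 0.50,
}

def run_eval(model, capability: str) -> float:
    """Stand-in for a real benchmark harness; returns a score in [0, 1]."""
    return model.score(capability)

def periodic_safety_check(model) -> list[str]:
    """Return the capabilities that have crossed their critical level."""
    breaches = []
    for capability, threshold in CRITICAL_CAPABILITY_LEVELS.items():
        if run_eval(model, capability) >= threshold:
            breaches.append(capability)
    return breaches

# In practice you'd run this on a schedule (e.g. every N training checkpoints)
# and trigger the mitigation plan for any breached capability.
```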
Researchers at Cambridge have identified quite a different risk. They're rightly concerned about the proliferation of chatbots trained on a dead person's data in order to provide a superficial imitation of that person. You may (like me) find the whole concept somewhat abhorrent, but it could potentially be used in grief management and other scenarios if we're careful. The problem is that we aren't being careful.
Image credit: University of Cambridge / T. Hollanek
“This area of AI is an ethical minefield,” says lead researcher Katarzyna Nowaczyk-Basinska. “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology already exists.” The researchers identify positive outcomes and discuss the concept more generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!
In a decidedly less creepy application of AI, physicists at MIT are looking at a useful (to them) tool for predicting a physical system's phase or state, normally a statistical task that can grow onerous with more complex systems. But train a machine learning model on the right data and ground it in some known material characteristics of the system, and you have a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
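There's no code in the writeup, but the general recipe is easy to sketch: fit a classifier to labeled examples of a system's measurable properties, then let it call the phase for new samples. The toy example below is purely illustrative and is not MIT's method; it fakes an order parameter that collapses above a made-up critical temperature and trains a logistic regression to tell the two phases apart.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for a physical system: below a made-up critical temperature T_C
# the order parameter is large; above it, it's just noise around zero.
T_C = 2.27
temps = rng.uniform(1.0, 3.5, size=5000)
order_param = np.clip(1.0 - temps / T_C, 0.0, None) ** 0.125 \
    + rng.normal(0.0, 0.05, temps.size)

X = order_param.reshape(-1, 1)      # "measured" property of each sample
y = (temps < T_C).astype(int)       # phase label: 1 = ordered, 0 = disordered

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"phase-classification accuracy: {clf.score(X_test, y_test):.3f}")
```

In a real setting the features would be quantities you can actually simulate or measure (energies, correlation functions and so on), but the workflow is the same: label configurations whose phase you already know, then let the model classify the rest.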
Over at CU Boulder, they're talking about how AI can be used in disaster management. The technology may be useful for quickly predicting where resources will be needed, mapping damage and even helping train responders, but people are (understandably) hesitant to apply it in life-or-death scenarios.
Workshop participants. Image credit: CU Boulder
Professor Amir Behzadan is trying to move this forward, saying that “human-centered AI can lead to more effective disaster response and recovery practices by fostering collaboration, understanding and inclusivity among team members, survivors and stakeholders.” They're still at the workshop stage, but it's important to think deeply about this before trying to, say, automate the distribution of relief supplies after a hurricane.
Finally, some interesting work from Disney Research, which looks at how to diversify the output of diffusion image-generation models, which can produce similar results over and over for some prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply couldn't put it better myself.
Image credit: Disney Research
The result is a much wider variety of angles, settings and overall looks in the image output. Sometimes you want this and sometimes you don't, but it's nice to have the option.
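For the curious, that quoted sentence translates to a fairly small change in the sampling loop. Below is a rough, hypothetical sketch of the general idea, not Disney Research's implementation: the conditioning embedding is perturbed with Gaussian noise whose scale decreases monotonically to zero over the denoising steps, and `denoise_step` is a placeholder for whatever model and sampler you actually use.

```python
import numpy as np

def annealed_conditioning_sample(denoise_step, cond, num_steps=50,
                                 init_noise_scale=1.0, latent_shape=(4, 64, 64),
                                 seed=None):
    """Diffusion-style sampling loop that anneals the conditioning signal:
    each step adds Gaussian noise to the conditioning vector with a scale that
    decreases linearly to zero, so early steps explore more diverse outputs
    while later steps lock onto the prompt."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(latent_shape)                    # start from pure noise
    for t in range(num_steps):
        scale = init_noise_scale * (1.0 - t / (num_steps - 1))   # 1 -> 0
        noisy_cond = cond + rng.normal(0.0, scale, size=cond.shape)
        x = denoise_step(x, noisy_cond, t)                   # your model's update
    return x

# Toy usage with a dummy denoiser, just to show the shapes involved.
dummy_cond = np.zeros(768)
out = annealed_conditioning_sample(lambda x, c, t: 0.95 * x, dummy_cond, seed=0)
```

Setting init_noise_scale to zero recovers ordinary conditional sampling; raising it trades prompt adherence for variety, which is the balance the paper describes.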