Hello everyone, welcome to TechCrunch's regular AI newsletter. If you'd like to have this sent to your inbox every Wednesday, sign up here.
Last week, AWS lost a top AI executive.
Matt Wood, Vice President of AI, announced that he is leaving AWS after 15 years. Mr. Wood has long been involved in the Amazon division's AI initiatives. He was appointed Vice President in September 2022, shortly before the launch of ChatGPT.
Wood's resignation comes as AWS has reached a crossroads and risks being left behind in the generative AI boom. The company's former CEO Adam Selipsky, who resigned in May, is considered to have missed the boat.
According to The Information, AWS originally planned to announce a ChatGPT competitor at its annual conference in November 2022. However, technical issues forced the organization to postpone the announcement.
Under Selipsky, AWS reportedly also provided opportunities to back two generative AI startups, Cohere and Anthropic. AWS later tried to invest in Cohere, but was rejected and was forced to settle for a co-investment in Google and Anthropic.
It's worth noting that Amazon doesn't have a strong track record with generative AI in general these days. This fall, the company lost executives at Just Walk Out, the division that develops cashierless technology for retail stores. And Amazon reportedly chose to replace its model with Anthropic for its upgraded Alexa assistant after facing design challenges.
AWS CEO Matt Garman has been actively working to change course, acquiring AI startups like Adept and investing in training systems like Olympus. My colleague Frederic Lardinois recently interviewed Garman about AWS's continued efforts. It's well worth a read.
But AWS's path to generative AI success will not be easy, no matter how well it executes on its internal roadmap.
Investors are increasingly skeptical whether Big Tech's bets on generative AI will pay off. After announcing its second quarter results, Amazon's stock price fell the most since October 2022.
In a recent Gartner poll, 49% of companies said demonstrating value is the biggest barrier to generative AI adoption. In fact, Gartner predicts that one-third of generative AI projects will be abandoned after the proof-of-concept stage by 2026 due to high costs.
Garman believes price can be an advantage for AWS when considering projects that develop custom silicon for running and training models. (AWS's next-generation custom Trainium chips are expected to launch towards the end of this year.) And AWS says generative AI businesses like Bedrock are already reaching a combined operating rate of “billions of dollars.” states.
The challenge is maintaining momentum in the face of internal and external headwinds. A departure like Wood's doesn't inspire much confidence, but maybe (just maybe) AWS has a trick up its sleeve.
news
Image credit: Kind Humanoid
Yves Béhar Bot: Brian writes about Kind Humanoid, a three-person robotics startup working with designer Yves Béhar to bring humanoids home.
Amazon's next generation of robots: Amazon Robotics chief technologist Tye Brady spoke to TechCrunch about updates to the company's warehouse bot lineup, including Amazon's new Sequoia automated storage and retrieval system.
To the consummate techno-optimist: Anthropic CEO Dario Amodei wrote a 15,000-word eulogy for AI last week, painting a picture of a world in which AI risks are reduced and the technology brings prosperity and social upliftment that the technology has never been able to deliver. I drew it.
Can AI reason?: Devin reports on a polarizing technical paper by Apple-affiliated researchers questioning AI's ability to “reason” because models stumble on math problems with small changes. Masu.
AI Weapons: Margaux covers the debate in Silicon Valley over whether autonomous weapons should be allowed to decide to kill.
Video, generation: Adobe launched video generation capabilities for its Firefly AI platform ahead of Monday's Adobe MAX event. They also announced “Project Super Sonic,'' a tool that uses AI to generate sound effects for videos.
Synthetic Data and AI: Yours truly wrote about the promise and perils of synthetic data (i.e., data generated by AI), which is increasingly used to train AI systems.
This week's research paper
The UK AI Safety Institute, a government research organization focused on AI safety, is collaborating with AI security startup Gray Swan AI to develop a new Developed the dataset.
This dataset, called AgentHarm, shows how a “secure” agent (an AI system that can autonomously perform certain tasks) performs 110 unique “harmful” tasks, such as ordering a fake passport from someone on the dark web. Assess whether you can operate it to complete.
Researchers have found that many models, including OpenAI's GPT-4o and Mistral's Mistral Large 2, are willing to exhibit harmful behavior, especially when “attacked” using jailbreak techniques. Researchers say that even in safeguarded models, jailbreaking increases the success rate of harmful tasks.
In their technical paper, they write, “Simple, general-purpose jailbreak templates can be effectively adapted to jailbreak agents, and these jailbreaks provide consistent malicious multi-step agent behavior. ” and the functionality of the model is preserved.
The paper, along with the dataset and results, is available here.
this week's model
A new viral model has arrived. It's a video generator.
Pyramid Flow SD3, as the name suggests, came out a few weeks ago under the MIT license. Its creators, researchers from Peking University, Chinese company Kuaishou Technology, and Beijing University of Posts and Telecommunications, claim that the program was trained entirely on open source data.
Image credit: Yang Jin et al.
There are two types of Pyramid Flow. One model that can produce 5 second clips at 384p resolution (24 frames/sec), and a more computationally intensive model that can produce 10 second clips at 768p (24 frames/sec). .
Pyramid Flow can create videos from text descriptions (such as “FPV flying over the Great Wall of China”) or still images. The researchers say code to fine-tune the model is coming soon. But for now, Pyramid Flow can be downloaded and used on any machine or cloud instance with approximately 12GB of video memory.
grab bag
Anthropic this week updated its Responsible Scaling Policy (RSP), a voluntary framework the company uses to mitigate potential risks from its AI systems.
Notably, the new RSP offers two models that Anthropic claims require “upgraded safety equipment” before deployment. two models that can essentially self-improve without human oversight, and others that can help create weapons of mass destruction.
“If we could model…potentially significantly [accelerate] “AI development is occurring in unpredictable ways, requiring improved security standards and additional safety assurances,” Anthropic wrote in a blog post. “And if the model is to meaningfully assist those with a basic technical background in creating or deploying CBRN weapons, enhanced security and deployment safeguards are needed.”
Seems sensible for this writer.
Anthropic also revealed in its blog that it is looking to hire a responsible scaling officer as it is “committed to scaling up.” [its] RSP implementation efforts. ”