Hey everyone, welcome to TechCrunch's regular AI newsletter.
Last Sunday, President Joe Biden announced that he would not seek reelection and instead gave his “full endorsement” to Vice President Kamala Harris as the Democratic nominee. Within days, Harris had secured the support of a majority of Democratic delegates.
Harris has been outspoken on technology and AI policy. So what would AI regulation in the US look like if she wins the presidential election?
My colleague Anthony Ha wrote a bit about this over the weekend. Harris and President Biden have previously said they “reject the false choice between protecting our citizens and advancing innovation.” At the time, the White House had secured voluntary commitments from leading AI companies to adopt new safety standards, and Harris called the voluntary initiative “a first step toward a safer AI future, with more to come,” because “in the absence of regulation or strong government oversight, some technology companies are choosing to prioritize profits over the well-being of their customers, the safety of our communities, and the stability of our democracy.”
We also spoke to AI policy experts, most of whom said they would like to see consistency from a Harris administration, rather than the dismantling of current AI policy and the broad deregulation that Donald Trump's campaign has advocated.
Lee Tiedrich, an AI consultant at the Global Partnership on Artificial Intelligence, told TechCrunch that Biden's endorsement of Harris could “increase the likelihood of maintaining continuity” in U.S. AI policy. “The 2023 AI Executive Order sets the framework and is marked by multilateralism through the UN, G7, OECD, and other organizations,” she said. “The order and related measures also call for increased government oversight of AI, including stepped-up enforcement, expanded agency AI rules and policies, a focus on safety, and mandatory testing and disclosure requirements for some large-scale AI systems.”
Sarah Kreps, a Cornell University political science professor with a special interest in AI, said there is a perception in some parts of the tech industry that the Biden administration is being too aggressive in regulating and that the executive orders on AI are “too much micromanagement.” While Kreps doesn't expect Harris to roll back any of the AI safety protocols enacted under the Biden administration, she wonders whether her administration might take a less top-down regulatory approach to appease critics.
Krystal Kauffman, a research fellow at the Distributed AI Research Institute (DAIR), agrees with Kreps and Tiedrich that Harris will likely continue Biden's efforts to address the risks of AI and increase transparency around it. But if she wins, Kauffman would like to see her cast a wider net of stakeholders in policymaking, including the data workers whose hardships often go unrecognized: low wages, poor working conditions, and mental health challenges.
“Harris must include the voices of the data workers who program AI in the important conversations ahead,” Kauffman said. “We can't continue to treat closed-door meetings with tech CEOs as a way to shape policy. That's a sure way to head in the wrong direction.”
News
Meta Releases New Model: This week, Meta released Llama 3.1 405B, a text generation and analysis model containing 405 billion parameters. Llama 3.1 405B, the largest “open” model to date, is being deployed across Meta platforms and apps, including Meta AI experiences on Facebook, Instagram, and Messenger.
Adobe revamps Firefly: Adobe released new Firefly tools for Photoshop and Illustrator on Tuesday, giving graphic designers more ways to use the company's in-house AI models.
Facial Recognition in Schools: A school in the UK has been formally reprimanded by the country's data protection regulator for using facial recognition technology to process pupils' face scans without getting their explicit opt-in consent.
Cohere Raises $500 Million: Cohere, a generative AI startup co-founded by former Google researchers, has raised $500 million in new funding from investors including Cisco and AMD. Unlike many of its generative AI startup rivals, Cohere customizes AI models for large enterprises, which has been a key factor in the company's success.
CIA AI Director Interview: As part of TechCrunch's ongoing “Women in AI” series, we interviewed Lakshmi Raman, Director of AI at the CIA, about her journey to the position, the CIA's use of AI, and the balance that must be struck between embracing new technologies and deploying them responsibly.
Research Paper of the Week
Have you heard of the transformer? It's the AI model architecture of choice for complex reasoning tasks, underpinning models like OpenAI's GPT-4o, Anthropic's Claude, and many others. But as powerful as transformers are, they have their drawbacks, which is why researchers are investigating alternatives.
One of the most promising candidates is the state-space model (SSM), which combines properties of several older types of AI models, including recurrent and convolutional neural networks, to create a more computationally efficient architecture that can ingest long sequences of data (such as novels or movies). And one of the most powerful SSMs to date, Mamba-2, was detailed in a paper this month by researchers Tri Dao, a professor at Princeton University, and Albert Gu of Carnegie Mellon University.
Like its predecessor, Mamba-2 can process larger chunks of input data than comparable transformer-based models while remaining competitive with them on certain language generation tasks. Dao and Gu suggest that continued improvements to SSMs could one day enable generative AI applications that run on commodity hardware and are more capable than anything today's transformers can deliver.
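To build some intuition for why SSMs scale so gracefully with sequence length, here's a minimal sketch of the classic linear state-space recurrence they build on. To be clear, this is not Mamba-2 itself (which adds input-dependent “selective” parameters and a hardware-aware scan), and all names and sizes below are illustrative:

```python
import numpy as np

# Minimal linear state-space layer: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.
# This is the textbook recurrence SSMs build on; Mamba-2 layers selective,
# input-dependent parameters and a parallel scan on top of this idea.
state_dim, in_dim, out_dim = 16, 8, 8
rng = np.random.default_rng(0)

A = rng.normal(scale=0.1, size=(state_dim, state_dim))  # state transition
B = rng.normal(scale=0.1, size=(state_dim, in_dim))     # input projection
C = rng.normal(scale=0.1, size=(out_dim, state_dim))    # output readout

def ssm_scan(xs):
    """Run the recurrence over a sequence with a fixed-size hidden state.

    Each token costs a constant amount of compute, and memory stays
    O(state_dim) however long the sequence gets; a transformer instead
    attends over the entire history at every step."""
    h = np.zeros(state_dim)
    ys = []
    for x in xs:
        h = A @ h + B @ x   # fold the new token into the fixed-size state
        ys.append(C @ h)    # read out a prediction from the state
    return np.stack(ys)

# A 100,000-step sequence processed in constant memory.
out = ssm_scan(rng.normal(size=(100_000, in_dim)))
print(out.shape)  # (100000, 8)
```

The sketch is about the shape of the computation, not model quality: because the state never grows, doubling the sequence length doubles the compute but not the memory, which is the property that lets SSMs ingest novel-length inputs.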
Model of the Week
In another recent architecture-related development, a team of researchers has created a new type of generative AI model that they claim can match, or even surpass, both the most powerful transformers and Mamba-based models in terms of efficiency.
I'm excited to share with you a project I've been working on for over a year that I believe will fundamentally change how we approach language modeling.
We designed a new architecture that replaces the hidden state of an RNN with a machine learning model. This model… pic.twitter.com/DEcI3nB1xC
— Karan Dalal (@karansdalal) July 8, 2024
The researchers say their architecture, called a test-time training (TTT) model, can process millions of tokens, with the potential to scale to billions of tokens in future, refined designs. (In generative AI, a “token” is a bite-sized piece of raw data, such as a fragment of text.) Because TTT models can handle many more tokens than conventional models without overly taxing hardware resources, the researchers believe they could power the next generation of generative AI apps.
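The tweet above captures the core idea: the hidden state is itself a tiny model that trains as it reads. Here's a heavily simplified, hypothetical sketch of that mechanism; the linear inner model and the reconstruction loss are my own illustrative choices, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical sketch of test-time training (TTT): the "hidden state" is
# the weight matrix W of a small inner model, and every incoming token
# triggers one gradient step on a self-supervised loss.
dim, lr = 8, 0.1
rng = np.random.default_rng(1)

def ttt_scan(xs):
    W = np.zeros((dim, dim))        # hidden state = weights of the inner model
    ys = []
    for x in xs:
        err = W @ x - x             # self-supervised reconstruction error
        W -= lr * np.outer(err, x)  # one gradient step on 0.5 * ||W @ x - x||^2
        ys.append(W @ x)            # output from the freshly updated inner model
    return np.stack(ys)

# As with an SSM, memory stays fixed (dim x dim) however long the input is.
print(ttt_scan(rng.normal(size=(1_000, dim))).shape)  # (1000, 8)
```

The appeal over a plain recurrence is that the state is updated by learning rather than by a fixed rule, so it can, in principle, compress a very long history more expressively while still using a constant amount of memory.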
If you want to learn more about the TTT model, check out our recent feature.
Grab Bag
Stability AI, the generative AI startup that investors including Napster co-founder Sean Parker recently swooped in to save from financial collapse, has been courting considerable controversy over its restrictive new product terms of use and licensing policies.
Until recently, commercial use of Stability AI's latest open image model, Stable Diffusion 3, required organizations with less than $1 million in annual revenue to sign up for a “creator” license that capped the total number of images they could generate at 6,000 per month. The bigger issue for many customers, though, was Stability's restrictive fine-tuning terms, which gave the company the right (or at least appeared to) to charge for, and exert control over, models trained on images generated by Stable Diffusion 3.
Stability AI's heavy-handed approach led CivitAI, one of the largest hosts of image-generation models, to temporarily ban models based on or trained with Stable Diffusion 3 images while it sought legal advice on the new license.
“Our concern is that, in our current understanding, this license gives Stability AI excessive permissions not only for models fine-tuned with Stable Diffusion 3, but also for the use of other models that include Stable Diffusion 3 imagery in their datasets,” CivitAI said in a blog post.
In response to the backlash, Stability AI announced earlier this month that it was adjusting Stable Diffusion 3's licensing terms to allow more liberal commercial use. “As long as you don't use it for activities that are illegal or clearly violate our license or terms of use, Stability AI will never ask you to delete resulting images, fine-tunes, or other derivative products, even if you never pay Stability AI,” the company clarified in a blog post.
The furor highlights the legal pitfalls that continue to plague generative AI and, relatedly, the extent to which “open” is subject to interpretation. Call me a pessimist, but the rise of controversial and restrictive licenses suggests the AI industry isn't going to reach agreement or move toward clarity anytime soon.