Hey everyone, welcome to TechCrunch's regular AI newsletter.
A survey released this week suggests that Gen Z, a generation the media loves to scrutinize, has decidedly mixed feelings about AI.
Samsung recently surveyed more than 5,000 Gen Zers in France, Germany, South Korea, the UK and the US about their opinions on AI and technology in general, and found that around 70% consider AI a “go-to” resource not only for work-related tasks like summarizing documents and meetings or conducting research, but also for non-work tasks like finding inspiration and brainstorming.
But a report released earlier this year by professional essay writing service EduBirdie found that more than a third of Gen Zers who use OpenAI's chatbot platform ChatGPT and other AI tools at work feel guilty about doing so. Respondents expressed concern that AI could limit their critical thinking and stifle creativity.
Of course, both survey results should be taken with a pinch of salt. Samsung is hardly a neutral party; it develops and sells a number of AI-powered products, so it has a vested interest in portraying AI in a generally positive light. EduBirdie is no different: its core business competes directly with ChatGPT and other AI writing assistants, so the company no doubt wants people to be wary of AI, especially AI apps that offer help with essays.
But while Gen Z isn't ready to write off AI entirely, let alone boycott it (if that's even possible), they may be more clear-eyed than previous generations about the potential impacts of AI, and of technology in general.
Another survey, by the National Society of High School Scholars, an academic honor society, found that a majority of Gen Zers (55%) believe AI will have a more negative than positive effect on society over the next decade, and that the same share (55%) expect AI to significantly affect personal privacy, and not in a good way.
And Gen Z's opinion matters: A NielsenIQ report predicts that Gen Z will soon be the wealthiest generation in history, with their spending potential reaching $12 trillion by 2030 and surpassing that of Baby Boomers by 2029.
Some AI startups spend more than 50% of their revenue on hosting, compute, and software, according to data from accounting firm Kruze. When every dollar counts, easing Gen Z's fears about AI is smart business. Whether companies can actually ease those fears remains to be seen, given the many technical, ethical, and legal challenges involved. But the least they can do is try.
News
OpenAI Deals with Condé: OpenAI has inked a deal with Condé Nast, publisher of well-known media outlets like The New Yorker, Vogue, and Wired, to feature the publisher's articles on OpenAI's AI-powered chatbot platform ChatGPT and search prototype SearchGPT, as well as train its AI on Condé Nast content.
AI Demand Threatens Water Supplies: The AI boom is driving demand for data centers, which in turn is driving up water consumption. According to the Financial Times, Virginia, home to the world's largest concentration of data centers, saw its water usage increase by nearly two-thirds between 2019 and 2023, from 1.13 billion gallons to 1.85 billion gallons.
Review of Gemini Live and Advanced Voice Mode: Tech giants shipped two new AI-powered, voice-centric chat experiences this month: Google's Gemini Live and OpenAI's Advanced Voice Mode. Both feature lifelike voices and let users interrupt the bot at any time.
Trump Re-shares Taylor Swift Deepfakes: On Sunday, former President Donald Trump posted a collection of memes to Truth Social that made it seem like Taylor Swift and her fans were endorsing his candidacy. But my colleague Amanda Silberling writes that these images could have deeper implications for the use of AI-generated images in political campaigns as new laws go into effect.
The big debate over SB 1047: The California bill known as SB 1047 is intended to forestall real-world disasters caused by AI, but it continues to attract prominent critics. Most recently, Rep. Nancy Pelosi issued a statement in opposition, calling the bill “well-intentioned” but “based on ignorance.”
Research Paper of the Week
Transformers, proposed by a team of Google researchers in 2017, have become the most mainstream generative AI model architecture to date. They underpin OpenAI's video generation model Sora and the latest versions of Stable Diffusion and Flux, and they're at the core of text generation models such as Anthropic's Claude and Meta's Llama.
And now Google is using them to recommend songs.
In a recent blog post, a team from Google Research, one of the company's many R&D divisions, detailed the new(ish) Transformer-based system behind YouTube Music's recommendations. They say it's designed to take in signals like the “intent” of a user's action (e.g., pausing a track), the “saliency” of that action (e.g., the play rate of a track), and other metadata in order to surface related tracks a user is likely to enjoy.
Google says its Transformer-based recommender “significantly” reduced music skip rates and increased the time users spent listening. That sounds like a win for Google, and for listeners too.
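The blog post is light on implementation detail, but the general shape of such a system is easy to sketch: embed each action in a user's listening history along with its intent and saliency signals, run the sequence through a Transformer encoder, and score candidate tracks against the resulting user representation. Below is a minimal, hypothetical PyTorch sketch; every name, signal, and dimension is an assumption for illustration, not Google's actual architecture.

```python
import torch
import torch.nn as nn

class MusicRecommender(nn.Module):
    """Toy Transformer recommender: encodes a listening history
    (track IDs plus intent/saliency signals) and scores candidates.
    Purely illustrative; not YouTube Music's real system."""

    def __init__(self, num_tracks: int, num_intents: int = 4, d_model: int = 64):
        super().__init__()
        self.track_emb = nn.Embedding(num_tracks, d_model)
        self.intent_emb = nn.Embedding(num_intents, d_model)  # e.g., play/pause/skip/like
        self.saliency_proj = nn.Linear(1, d_model)            # e.g., play rate in [0, 1]
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tracks, intents, saliency, candidates):
        # Fuse the per-action signals into one embedding per history step.
        x = (self.track_emb(tracks)
             + self.intent_emb(intents)
             + self.saliency_proj(saliency.unsqueeze(-1)))
        h = self.encoder(x)                     # (batch, seq_len, d_model)
        user_vec = h[:, -1]                     # last position summarizes the user
        cand_vecs = self.track_emb(candidates)  # (batch, n_candidates, d_model)
        # Higher dot product = more likely to be surfaced next.
        return torch.einsum("bd,bnd->bn", user_vec, cand_vecs)

# Example: one user, a 5-action history, 3 candidate tracks to rank.
model = MusicRecommender(num_tracks=1000)
tracks = torch.randint(0, 1000, (1, 5))
intents = torch.randint(0, 4, (1, 5))
saliency = torch.rand(1, 5)                    # fraction of each track played
candidates = torch.randint(0, 1000, (1, 3))
print(model(tracks, intents, saliency, candidates))  # (1, 3) scores
```

The point of the attention layers is that each past action gets weighed in context, so a deliberate skip can count against a track more heavily than an accidental pause.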
Model of the Week
While not entirely new, OpenAI's GPT-4o is my pick for model of the week, because it can now be fine-tuned on custom data.
On Tuesday, OpenAI rolled out fine-tuning for GPT-4o, allowing developers to customize the structure and tone of the model's responses with their own datasets, as well as make the model follow “domain-specific” instructions.
Fine-tuning is not a panacea, but as OpenAI notes in the blog post announcing the feature, it can have a significant impact on your model's performance.
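For those curious about the mechanics, the feature runs through OpenAI's existing fine-tuning API: upload a JSONL file of example conversations, then start a job against a GPT-4o snapshot. Here's a minimal sketch; the file path and training examples are placeholders, and the model identifier shown is the snapshot name at launch, so check OpenAI's docs for the current one.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data is a JSONL file, one example conversation per line, e.g.:
# {"messages": [{"role": "system", "content": "You are a terse support bot."},
#               {"role": "user", "content": "Where's my order?"},
#               {"role": "assistant", "content": "Tracking link sent. Check your email."}]}
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),  # placeholder path
    purpose="fine-tune",
)

# Kick off the fine-tuning job against a GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # launch snapshot; may have changed since
)
print(job.id, job.status)
```

Once the job completes, the fine-tuned model gets its own identifier and can be called like any other model through the chat completions API.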
Grab Bag
Hardly a day goes by without a new copyright lawsuit over generative AI, and this time the target is Anthropic.
A group of authors and journalists filed a class action lawsuit against Anthropic in federal court this week, alleging that the company committed “grand theft” by training its AI chatbot, Claude, on pirated e-books and articles.
Anthropic “has built a multibillion-dollar business on stealing hundreds of thousands of copyrighted books,” the plaintiffs say in their complaint, arguing that purchasing legal copies or borrowing them from libraries would at least have paid some compensation to authors and creators.
Most models are trained on data scraped from public websites and web datasets. AI companies argue that fair use protects their practice of indiscriminately collecting that data and using it to train commercial models. Many copyright holders disagree, however, and a number of them have filed lawsuits of their own to stop the practice.
This latest suit accuses Anthropic of using The Pile, a collection of datasets that includes Books3, a vast library of pirated e-books. Anthropic recently acknowledged to Vox that The Pile was among the datasets in Claude's training set.
The plaintiffs seek an unspecified amount of damages and an injunction permanently enjoining Anthropic from unauthorized use of the authors' copyrighted material.