Keeping up with an industry as rapidly changing as AI can be a challenge. Until AI can do it for you, here's a roundup of recent buzz in the world of machine learning, along with notable research and experiments that we didn't cover on their own.
By the way, TechCrunch will be launching an AI newsletter on June 5th, so stay tuned. In the meantime, we'll be increasing the cadence of our semi-regular AI column from twice a month to weekly, so keep an eye out for upcoming issues.
This week in AI, OpenAI launched discount plans for nonprofit and education customers and unveiled its latest efforts to stop bad actors from misusing its AI tools. There isn't much to criticize in the announcements themselves, at least in my opinion. But their timing seems calculated to counter the company's recent run of bad press.
Let's start with Scarlett Johansson. OpenAI removed one of the voices used in its AI-powered chatbot ChatGPT after users pointed out that the voice sounded eerily similar to Johansson's. Johansson later released a statement saying that she had hired a lawyer to investigate the voice and get exact details of how it was developed. She also said that she had been asked multiple times by OpenAI to give ChatGPT permission to use her voice, but that she had refused.
Now, a Washington Post article suggests that OpenAI wasn't actually trying to replicate Johansson's voice, and that the similarity was coincidental. But then why did OpenAI CEO Sam Altman contact Johansson two days before a splashy demo featuring the soundalike voice, urging her to reconsider? A bit of a mystery remains.
Then there's the issue of OpenAI's reliability and safety.
As we reported earlier this month, OpenAI's now-disbanded Superalignment team, charged with developing ways to manage and control "superintelligent" AI systems, was promised 20% of the company's computing resources but only ever (and rarely) received a fraction of them. This (among other reasons) led to the resignations of the team's two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI's chief scientist.
Nearly a dozen safety experts have left OpenAI over the past year, several of them, including Leike, publicly voicing concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company's projects and operations. But it staffed the committee with company insiders, including Altman himself, rather than outside observers. This comes as OpenAI reportedly weighs abandoning its nonprofit structure in favor of a more traditional for-profit model.
Incidents like these make it harder to trust OpenAI, a company whose power and influence grow daily (see: its deals with news publishers). Few companies are worthy of trust, but OpenAI's market-disrupting technology makes its transgressions all the more troubling.
It's made worse by the fact that Altman himself is hardly the epitome of honesty.
After reports emerged about OpenAI's aggressive tactics toward former employees (threatening to cancel their vested equity, or to block equity sales, if they didn't sign restrictive non-disclosure agreements), Altman apologized and claimed he was unaware of the policy, though Vox reports that he signed the founding documents that established it.
And if we believe Helen Toner, a former OpenAI board member and one of those who tried to remove Altman from his position late last year, Altman withheld information, misrepresented what was happening at OpenAI, and in some cases outright lied to the board. Toner claims the board learned of ChatGPT's release via Twitter, not from Altman; that Altman gave false information about OpenAI's formal safety practices; and that Altman, unhappy with an academic paper Toner co-authored that was critical of OpenAI, tried to manipulate board members into pushing her off the board.
None of these are good signs.
Here's some other notable AI news from the past few days:
Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make it incredibly easy to fake statements from politicians.
Google's AI Overviews struggle: AI Overviews, the AI-generated search results that Google began rolling out more broadly in Google Search earlier this month, have room for improvement. The company acknowledges this but claims it is iterating quickly. (We'll wait and see.)
Paul Graham on Altman: In a series of posts on X, Paul Graham, co-founder of startup accelerator Y Combinator, brushed off claims that Altman was pressured to step down as president of Y Combinator in 2019 over a potential conflict of interest. (Y Combinator holds a small stake in OpenAI.)
xAI raises $6B: Elon Musk's AI startup xAI has raised $6 billion in funding as it ramps up to compete aggressively with rivals like OpenAI, Microsoft, and Alphabet.
Perplexity's new AI feature: With its new Perplexity Pages feature, AI startup Perplexity aims to help users create reports, articles, and guides in a more visually appealing format, Ivan reports.
AI models' favorite numbers: Devin writes about the numbers different AI models choose when asked to give random answers. It turns out models have favorite numbers, a reflection of the data each was trained on.
Mistral releases Codestral: Mistral, the Microsoft-backed French AI startup valued at $6 billion, has released its first generative AI model for coding, Codestral. It can't be used commercially, however, thanks to Mistral's fairly restrictive license.
Chatbots and privacy: Natasha writes about the European Union's ChatGPT taskforce and how it offers a first look at untangling privacy compliance for AI chatbots.
ElevenLabs' sound generator: Voice cloning startup ElevenLabs has introduced a new tool, first announced in February, that lets users generate sound effects through prompts.
AI chip interconnects: Tech giants including Microsoft, Google, and Intel (but not Arm, Nvidia, or AWS) have formed the UALink Promoter Group, an industry consortium to help develop next-generation AI chip interconnects.