There's no need to worry that your secret ChatGPT conversations were obtained in the recently reported breach of OpenAI's systems. The hack itself, while troubling, appears to have been superficial. But it's a reminder that AI companies have quickly become some of the most attractive targets for hackers.
The New York Times reported more about the hack after former OpenAI employee Leopold Aschenbrenner alluded to it in a recent podcast. He called it a “major security incident,” but an anonymous company source told the Times that the hacker only had access to employee discussion forums. (I've reached out to OpenAI for clarification and comment.)
Security breaches are never to be taken lightly, and eavesdropping on OpenAI's internal development discussions would certainly be valuable, but access to an employee forum is a far cry from access to internal systems, in-progress models, secret roadmaps, and so on.
But either way, it's something that should frighten us, and not necessarily because of the threat of China or other adversaries overtaking us in the AI arms race. The simple fact is that these AI companies have become gatekeepers to vast amounts of highly valuable data.
We'll discuss three types of data that OpenAI, and to a lesser extent other AI companies, can create or have access to: high-quality training data, large amounts of user interactions, and customer data.
It's unclear exactly what training data these companies have, because they're very secretive about what they've accumulated, but it would be a mistake to think it's just a big pile of scraped web data. Sure, they use web scrapers and datasets like the Pile, but shaping that raw data into something that can be used to train a model like GPT-4o is a monumental task. It requires countless hours of human labor and can only be partially automated.
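To give a sense of what the automated part of that work looks like, here's a rough sketch of the kind of deduplication and heuristic quality filtering applied to scraped text. The heuristics and thresholds are purely illustrative, not anyone's actual pipeline.

```python
# Purely illustrative filtering pass over raw scraped documents.
# The thresholds below are made-up examples, not any company's real pipeline.
import hashlib
import re


def clean_corpus(documents):
    """Deduplicate raw text and drop documents that fail crude quality checks."""
    seen_hashes = set()
    kept = []
    for doc in documents:
        text = doc.strip()
        # Exact-duplicate removal via content hashing.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        words = text.split()
        # Too short to be useful training text.
        if len(words) < 50:
            continue
        # Average word length outside a plausible range suggests junk or markup.
        mean_word_len = sum(len(w) for w in words) / len(words)
        if not 3 <= mean_word_len <= 10:
            continue
        # Heavy markup residue (angle brackets, braces, pipes) suggests unclean HTML.
        if len(re.findall(r"[<>{}|]", text)) / len(text) > 0.01:
            continue
        kept.append(text)
    return kept
```

Filters like these catch the easy cases; the judgment calls about what counts as high-quality text are exactly the part that still eats human hours.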
Some machine learning engineers speculate that of all the factors that go into creating a large language model (or perhaps any transformer-based system), the single most important is the quality of the dataset. That's why a model trained on Twitter or Reddit is never going to be as eloquent as one trained on every published work of the last century. (And perhaps that's why OpenAI reportedly used sources of questionable legality, such as copyrighted books, for its training data, a practice it claims to have abandoned.)
This means that the training datasets that OpenAI has built are extremely valuable to competitors, from other companies to adversaries to U.S. regulators. The FTC and courts will likely want to know exactly what data was used, and whether OpenAI was honest about it.
But perhaps even more valuable is OpenAI's enormous trove of user data: billions of conversations with ChatGPT on hundreds of thousands of topics. Just as search data was once key to understanding the collective mind of the web, ChatGPT has its finger on the pulse of a population that may not be as broad as Google's entire user base, but offers far more depth. (In case you weren't aware, your conversations are used as training data unless you opt out.)
Google might see that searches for “air conditioner” are up, which tells you the market is heating up a bit. But those users aren't taking the time to discuss what they want, how much they can afford, what their home looks like, which brands they want to avoid, and so on. You know this information is valuable, because Google itself is trying to nudge users toward providing it by swapping AI-driven interactions in for plain search.
Think about how many conversations people have had with ChatGPT, and how useful that information is not only for AI developers, but also for marketing teams, consultants, and analysts. It's a gold mine.
The final data category is perhaps the most valuable on the open market: how customers actually use AI and the data they themselves input into the models.
Hundreds of large enterprises and countless smaller businesses alike use tools like OpenAI's and Anthropic's APIs for a wide variety of tasks, and for a language model to be useful to them, it typically has to be fine-tuned on, or given access to, their own internal databases.
This can range from mundane things like old budgets or personnel records (e.g., to make them easier to search) to valuable things like the code for unreleased software. What they do with those AI capabilities (and whether they're actually useful) is their business, but the simple fact is that, as with any SaaS product, the AI provider has privileged access.
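As an illustration of what that privileged access looks like in practice, here's a minimal sketch of a retrieval-style request, assuming the openai Python SDK's chat completions interface; search_internal_docs() is a hypothetical stand-in for whatever internal lookup a company might run. The point is simply that whatever the lookup returns travels to the provider as part of the request.

```python
# Minimal sketch, not a real integration: search_internal_docs() is a
# hypothetical stand-in for a company's internal document search, and the
# model name is just an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_internal_docs(question: str) -> str:
    # Placeholder for an internal lookup -- in reality this might return
    # budget lines, personnel records, or unreleased source code.
    return "Q3 budget draft: ..."


def answer_from_internal_data(question: str) -> str:
    context = search_internal_docs(question)
    # Whatever the lookup returned is now part of the API request body,
    # which the AI provider receives and processes.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```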
These are trade secrets, and AI companies are suddenly at the heart of a great many of them. This new side of the industry carries special risks, because AI processes are not yet standardized or fully understood.
Like other SaaS providers, AI companies are perfectly capable of providing industry-standard security, privacy, on-prem options, and generally delivering their services responsibly. There's no doubt that the private databases and API calls of OpenAI's Fortune 500 customers are locked down very tightly, and those customers are likely just as aware, if not more so, of the risks of handling sensitive data in an AI context. (That said, OpenAI's choice not to disclose this attack doesn't exactly inspire confidence among customers who desperately need it.)
But good security measures don't change the value of what they're protecting, or the fact that bad actors and adversaries of all kinds are constantly clawing at the door, trying to get in. Security isn't just about choosing the right settings or keeping your software up to date, though of course those basics matter too. It's a never-ending game of cat and mouse, ironically made even more so by AI itself, as agents and attack automation tools probe every inch of these companies' attack surfaces.
There's no need to panic. Companies with access to large amounts of personal information and commercially valuable data have faced and managed similar risks for years. But AI companies are newer, younger, and potentially more attractive targets than your run-of-the-mill misconfigured enterprise server or irresponsible data broker. Even a hack like the one reported above, with no serious data exposure as far as we know, should worry anyone who does business with AI companies. They're painting a target on their backs. Don't be surprised if someone, or everyone, takes a shot.