In the generative AI boom, data is the new oil. So why shouldn't you be able to sell your own?
From big tech companies to startups, AI makers are licensing e-books, images, video, audio, and more from data brokers, all in the pursuit of training better (and more legally defensible) AI-powered products. Shutterstock has deals with Meta, Google, Amazon, and Apple to supply millions of images for model training, while OpenAI has signed contracts with several news organizations to train its models on their archives.
In many cases, the individual creators and owners of that data never see a penny of the cash changing hands. A startup called Vana wants to change that.
Anna Kazlauskas and Art Abal co-founded Vana in 2021 after meeting in an MIT Media Lab class focused on building technology for emerging markets. Before Vana, Kazlauskas studied computer science and economics at MIT, eventually dropping out to launch Iambiq, a fintech automation startup, out of Y Combinator. Abal, a corporate lawyer by training and education, was an associate at the Boston-based consultancy Cadmus Group before heading up impact sourcing at Appen, a data annotation company.
With Vana, Kazlauskas and Abal set out to build a platform that lets users “pool” data such as chats, voice recordings, and photos into data sets that can be used to train generative AI models. They also want to fine-tune public models on that data to create more personalized experiences, like daily motivational voicemails based on your health goals or an art-generating app that understands your style preferences.
“Vana's infrastructure in effect creates a user-owned data treasury,” Kazlauskas told TechCrunch. “It does this by allowing users to aggregate their personal data in a non-custodial way. Vana allows users to own AI models and use their data across AI applications.”
Here's how Vana pitches its platform and API to developers:
The Vana API connects a user's cross-platform personal data, allowing you to personalize your application. Your app gets instant access to a user's personalized AI model and underlying data, streamlining onboarding and eliminating compute cost concerns. We think users should be able to bring their personal data from walled gardens like Instagram, Facebook, and Google to your application, so you can create an amazingly personalized experience from the very first time a user interacts with your consumer AI app.
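To make that pitch concrete, here is a minimal sketch of what an integration along those lines might look like. The base URL, endpoints, and field names below are hypothetical placeholders for illustration only, not Vana's documented API.

```python
# Hypothetical sketch of the kind of integration Vana describes above.
# The base URL, endpoints, and fields are illustrative assumptions,
# NOT Vana's documented API.
import requests

BASE_URL = "https://api.example-personal-data-service.com/v1"  # placeholder


def personalize_greeting(api_key: str, user_id: str) -> str:
    """Fetch a user's (user-permissioned) profile and prompt their personal model."""
    headers = {"Authorization": f"Bearer {api_key}"}

    # 1. Read whatever profile data the user has chosen to share with this app.
    profile = requests.get(
        f"{BASE_URL}/users/{user_id}/profile", headers=headers, timeout=10
    ).json()

    # 2. Ask the user's personalized model for an app-specific response.
    completion = requests.post(
        f"{BASE_URL}/users/{user_id}/model/generate",
        headers=headers,
        json={
            "prompt": f"Write a one-line welcome for {profile.get('display_name', 'this user')}."
        },
        timeout=30,
    ).json()
    return completion.get("text", "")
```

The key idea in the pitch is that the app never has to collect or store the user's history itself: it reads from data the user has already pooled and permissioned, and prompts a model the user owns.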
Creating an account with Vana is very easy. After confirming your email, you can attach data to your digital avatar (selfie, description of yourself, voice recording, etc.) and explore apps built using Vana's platform and data sets. App choices range from ChatGPT-style chatbots and interactive storybooks to Hinge profile generators.
Now, you might be wondering, in this era of heightened data privacy awareness and ransomware attacks, why anyone would volunteer their personal information to an anonymous startup, much less a venture-backed one. (Vana has raised $20 million to date from Paradigm, Polychain Capital, and other backers.) Can any profit-seeking company really be trusted not to abuse or mishandle the monetizable data it gets its hands on?
In response to that question, Kazlauskas emphasized that the whole point of Vana is for users to “take back control over their data,” pointing out that Vana users have the option to self-host their data rather than store it on Vana's servers, and to control how their data is shared with apps and developers. She also argued that, because Vana makes money by charging users a monthly subscription (starting at $3.99) and charging developers “data transaction” fees (for example, for transferring data sets to train AI models), the company has no incentive to exploit users and the troves of personal data they bring with them.
“We want to create models that are owned and controlled by all of the users contributing data, and to allow users to bring their data and models into any application,” Kazlauskas said.
For now, Vana isn't selling user data to companies to train generative AI models (or so it claims). Rather, it wants to let users do this themselves, starting with their Reddit posts.
This month, Vana launched what it calls the Reddit Data DAO (Digital Autonomous Organization), a program that pools multiple users' Reddit data (including karma and post history) and lets those users decide together how the combined data is used. After joining with their Reddit account, submitting a data export request to Reddit, and uploading that data to the DAO, users earn the right to vote alongside other members on how the combined data is used for their shared benefit.
We crunched the numbers, and r/datadao is now the largest data DAO in history: 141,000 Reddit users and 21,000 full data uploads in phase 1.
— r/datadao (@rdatadao) April 11, 2024
It's a response of sorts to Reddit's recent moves to commercialize data on its platform.
Reddit previously didn't restrict access to posts and communities for generative AI training purposes. But it reversed course late last year, ahead of its IPO. Since the policy change, Reddit has collected more than $203 million in licensing fees from companies including Google.
“The broad idea [with the DAO] is to free user data from the major platforms that hoard it and try to monetize it,” Kazlauskas said. “This is a first of its kind and is part of our push to enable people to pool their data into user-owned data sets for training AI models.”
Understandably, Reddit, which doesn't work with Vana in an official capacity, isn't happy about the DAO.
Reddit has banned Vana's subreddit dedicated to discussion of the DAO. A Reddit spokesperson also accused Vana of “abusing” its data export system, which is designed to comply with data privacy regulations such as the GDPR and the California Consumer Privacy Act.
“Our data arrangements allow us to put guardrails in place for such parties, even if the information is public,” a spokesperson told TechCrunch. “Reddit does not share non-public personal data with commercial companies, and when Reddit users request an export of their data from us, they receive non-public personal data back from us in accordance with applicable law. Direct partnerships between Reddit and vetted organizations, with clear terms and accountability, matter, and these partnerships and agreements prevent the misuse and abuse of people's data.”
But is there any real reason for Reddit to be concerned?
Kazlauskas expects the DAO to grow to the point where it influences how much Reddit can charge customers for its data. That's a long way off, if it ever happens; the DAO has just over 141,000 members, a small fraction of Reddit's 73 million-strong user base. And some of those members may be bots or duplicate accounts.
Then there's the question of how to fairly distribute any payments the DAO might receive from data buyers.
Currently, the DAO awards users “tokens” (a cryptocurrency) based on their Reddit karma. But karma may not be the best measure of quality contributions to the data set, especially in smaller Reddit communities, where there are fewer opportunities to earn it.
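As a back-of-the-envelope illustration (a hypothetical scheme, not Vana's published allocation logic), a strictly karma-proportional split works like this: each member's share of a payout equals their karma divided by the pool's total karma, so a thoughtful contributor from a small subreddit ends up with a tiny slice.

```python
# Hypothetical illustration of a karma-proportional payout split.
# This is NOT Vana's published allocation formula; it only shows why
# karma can be a skewed proxy for the value of a member's data.

def split_payout(payout: float, karma_by_user: dict[str, int]) -> dict[str, float]:
    """Divide a payout (tokens or cash) among members in proportion to Reddit karma."""
    total_karma = sum(karma_by_user.values())
    if total_karma == 0:
        # Fall back to an even split if no one has karma.
        return {user: payout / len(karma_by_user) for user in karma_by_user}
    return {
        user: payout * karma / total_karma
        for user, karma in karma_by_user.items()
    }


members = {
    "power_user_in_big_sub": 250_000,   # lots of karma, not necessarily rare data
    "niche_expert_in_small_sub": 800,   # valuable posts, little karma to show for it
}
print(split_payout(10_000.0, members))
# -> the niche expert gets roughly 32 out of a 10,000-unit payout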
Kazlauskas believes that DAO members could choose to share cross-platform and demographic data, which could further increase the value of the DAO and drive sign-ups. But this also requires users to trust Vana even more to handle their sensitive data responsibly.
Personally, I'm skeptical that Vana's DAO will reach critical mass; there are too many obstacles standing in the way. But I doubt this will be the last grassroots attempt to assert control over the data being used to train generative AI models.
Startups like Spawning are working on ways to let creators set rules governing how their data is used for training, and vendors like Getty Images, Shutterstock, and Adobe continue to experiment with reward systems. But no one has cracked the code yet. Can it even be cracked? Given the cutthroat nature of the generative AI industry, it's certainly a tall order. But perhaps someone will figure it out, or policymakers will force it.