Two of the biggest forces in two deeply intertwined tech ecosystems, large incumbents and startups, have taken a break from counting their money to jointly plead that the government stop even considering regulations that might affect their economic interests, or, as they like to call it, innovation.
"Our two companies may not agree on everything, but this is not about our differences," wrote the group, which shares vastly different perspectives and interests: a16z founding partners Marc Andreessen and Ben Horowitz, and Microsoft CEO Satya Nadella and President and Chief Legal Officer Brad Smith. A truly intersectional collective, representing both big business and big money.
But it's Little Tech that they claim to be looking out for: that is, any company that might have been affected by SB 1047, the latest supposed act of regulatory overreach.
Imagine being held liable for the improper disclosure of an open model! Anjney Midha, a general partner at a16z, called it a "regressive tax" on startups and a "blatant attempt at regulatory capture" by the big tech companies that, unlike Midha and his impoverished colleagues, could afford the lawyers needed to comply.
Except that this was all disinformation, spread by Andreessen Horowitz and other moneyed interests that, as backers of multi-billion-dollar companies, might actually have been affected. In fact, the proposed law specifically protected small models and startups, which would have been only marginally affected at most.
It's odd that the very deliberate carve-out for "Little Tech" that Horowitz and Andreessen routinely champion was distorted and minimized by the lobbying they and others did against SB 1047. (The bill's author, California Sen. Scott Wiener, explained as much recently at Disrupt.)
The bill had its share of problems, but its opponents vastly exaggerated the cost of compliance and failed to meaningfully substantiate their claims that it would discourage or burden startups.
It's part of a well-established playbook that big tech (with which, despite their posturing, Andreessen and Horowitz are closely aligned) runs at the state level, where it can win (as with SB 1047), while calling for federal solutions it knows will never arrive, or that will be stalled by partisan squabbles and congressional incompetence on technical issues.
This joint statement on "policy opportunities" is the latter half of the play: having torpedoed SB 1047, they can say they did so only in order to support federal policy. Never mind that we are still waiting on the federal privacy law that tech companies have pushed for a decade while fighting state bills.
And what policies do they support? "Responsible market-based approaches," in other words: hands off our money, Uncle Sam.
Regulation should take "a science- and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology" and should "focus on the risk of bad actors misusing AI." What this means is that there should be no proactive regulation, only reactive penalties when unregulated products are used by criminals for criminal purposes. That approach worked great for the whole FTX situation, so I can see why they espouse it.
"Regulation should be implemented only if its benefits outweigh its costs." It would take thousands of words to unpack everything funny about this idea expressed in this context. But essentially, what they are proposing is to put the fox on the henhouse planning committee.
Regulators should "give developers and startups the flexibility to choose which AI models to use when building their solutions, and avoid tilting the playing field in favor of any particular platform." The implication is that there is some plan to require permission to use one model or another. Since there isn't, this is a straw man.
Here's a big one that needs to be quoted in full:
Right to Learn: Copyright law is designed to promote the progress of science and useful arts by extending protections to publishers and authors, encouraging them to bring new works and knowledge to the public, but not at the expense of the public's right to learn from those works. Copyright law should not be co-opted to imply that machines should be prevented from using data, the foundation of AI, to learn in the same way as people. Knowledge and unprotected facts, regardless of whether they are contained in protected subject matter, should remain free and accessible.
To be clear, the argument here is that software operated by multi-billion-dollar corporations has a "right" to access any data, because it should be able to learn from it "in the same way as people."
First of all: no. These systems are not like people; they produce data that mimics the human output in their training data. They are complex statistical projection software with a natural-language interface. They have no more "right" to any document or fact than Excel does.
Second, the idea that "facts" (by which they mean "intellectual property") are the only thing these systems are after, and that some fact-hoarding cabal is working to stop them, is an engineered narrative we have seen before. Perplexity invoked the "facts belong to everyone" argument in its public response to accusations of systematic content theft, and its CEO Aravind Srinivas repeated the fallacy to me onstage at Disrupt, acting as though the company were being sued for knowing trivia like the distance from the Earth to the Moon.
This is not the place for a complete accounting of this particular straw man, but let me quickly point out that while facts are indeed free agents, the way they are created (for instance, through original reporting or scientific research) involves real costs. That is why the copyright and patent systems exist: not to prevent intellectual property from being shared and used widely, but to encourage its creation by allowing real value to be assigned to it.
Copyright law is far from perfect and can probably be abused as much as it is used. But it is not being co-opted to suggest that machines should be prevented from using data; it is being applied to prevent bad actors from circumventing the system of value we have built around intellectual property.
What they are asking for is quite obvious: that the systems they own, run, and profit from be free to use the valuable work of others without paying for it. To be fair, that part is "in the same way as people," because it is people who design, direct, and deploy these systems, and people don't like paying for things they don't have to, and don't want regulations to change that.
This little policy document contains plenty of other recommendations, and the versions sent directly to lawmakers and regulators through official lobbying channels are no doubt far more detailed.
Some of the ideas are undoubtedly good, if also a little self-serving. Fund "digital literacy programs that help people understand how to use AI tools to create and access information"? Good! Of course, the authors are heavily invested in those tools. Support "Open Data Commons, accessible pools of data managed in the public interest"? Wonderful! "Examine procurement practices so more startups can sell technology to governments"? Awesome!
But these more general, positive recommendations are the kinds of things industry trots out every year: invest in public resources, speed up government processes. These palatable but inconsequential suggestions are merely vehicles for the more important ones outlined above.
What Ben Horowitz, Brad Smith, Marc Andreessen, and Satya Nadella actually want is for the government to roll back regulation of this lucrative new development and let industry decide which regulations are worth the trade-off, more or less. They want copyright invalidated in a sweeping way, and forgiveness for the illegal or unethical practices that many suspect enabled AI's rapid rise. Those are the policies that matter, whether or not the kids become digitally literate.