Google has changed its terms of service to clarify that customers can deploy its generative AI tools to make “automated decisions” in “high-risk” areas such as healthcare, as long as a human is involved.
The company's updated Generative AI Prohibited Use Policy, announced Tuesday, says customers may use Google's generative AI to make “automated decisions” that could have a “material adverse impact on the rights of individuals.” Provided a human supervises in some capacity, customers can use Google's generative AI to make decisions about employment, housing, insurance, social welfare, and other “high-risk” areas.
In the context of AI, automated decision-making refers to decisions made by an AI system based on data that is both factual and inferred. A system might automatically decide whether to grant a loan or screen a job applicant, for example.
Google's previous terms implied an outright ban on high-risk automated decision-making involving the company's generative AI. But Google told TechCrunch that customers could always use its generative AI for automated decision-making, even in high-risk applications, so long as a human supervised the process.
“The human oversight requirement for all high-risk domains has always been part of our policy,” a Google spokesperson said when reached for comment via email. “[W]e're recategorizing some items [in our terms] and calling out some examples more explicitly to be clearer for users.”
Google's top AI competitors, OpenAI and Anthropic, have stricter rules governing the use of their AI in high-risk automated decision-making. OpenAI, for example, prohibits the use of its services for automated decisions about credit, employment, housing, education, social scoring, and insurance. Anthropic allows its AI to be used for automated decision-making in law, insurance, healthcare, and other high-risk fields, but only under the supervision of “qualified experts,” and it requires customers to disclose that they are using AI for this purpose.
AI that makes automated decisions affecting individuals has drawn increased scrutiny from regulators, who have raised concerns that the technology could produce biased outcomes. Research shows, for example, that AI used to approve credit and mortgage applications can perpetuate historical discrimination.
The nonprofit Human Rights Watch has specifically called for a ban on “social scoring” systems, arguing that they could block people's access to social security support, compromise their privacy, and profile them in prejudicial ways.
Under the EU's AI Act, high-risk AI systems, such as those that make individual credit and employment decisions, face the strictest oversight. Providers of these systems must, among other requirements, register in a database, carry out quality and risk management, employ human supervisors, and report incidents to the relevant authorities.
In the United States, Colorado recently passed a law requiring AI developers to disclose information about “high-risk” AI systems and publish statements summarizing the systems' capabilities and limitations. Meanwhile, New York City prohibits employers from using automated tools to screen candidates for employment decisions unless the tool has undergone a bias audit within the preceding year.