The White House announced that several major AI vendors, including OpenAI and Microsoft, have committed to taking steps to combat non-consensual deepfakes and child sexual abuse material.
Adobe, Cohere, Microsoft, OpenAI, and data provider Common Crawl said they will “responsibly” source the datasets they create and use to train AI, and safeguard those datasets against image-based sexual abuse. All of these organizations except Common Crawl also said they will build “feedback loops” and strategies into their development processes to guard against their AI generating sexual abuse imagery. Adobe, Microsoft, and OpenAI (but not Cohere) further committed to removing nude images from AI training datasets “when appropriate and depending on the purpose of the model.”
It's worth noting that the effort is entirely voluntary, and many AI vendors, including Anthropic and Midjourney, have chosen not to participate. OpenAI's commitment is especially questionable given that CEO Sam Altman said in May that the company would explore ways to “responsibly” generate AI porn.
Still, the White House touted the commitments as a victory in its broader effort to identify and mitigate the harms of deepfake nudes.