The National Institute of Standards and Technology (NIST), the U.S. Department of Commerce agency that develops and tests technology for the U.S. government, businesses, and the broader public, announced Monday the launch of NIST GenAI, a new program to assess generative AI technologies, including AI that generates text and images.
NIST GenAI will release benchmarks, help create “content authenticity” detection (i.e., deepfake-checking) systems, and encourage the development of software that identifies the sources of fake and misleading AI-generated information, NIST explains on the newly launched NIST GenAI website and in a press release.
“The NIST GenAI program issues a series of challenges aimed at assessing and measuring the capabilities and limitations of generative AI technologies,” the press release states. “These assessments are used to identify strategies to promote information integrity and guide the safe and responsible use of digital content.”
NIST GenAI's first project is a pilot study to build systems that can reliably tell the difference between human-created and AI-generated media, starting with text. (Many services claim to detect deepfakes, but research and our own testing show they are unreliable at best, particularly for text.) NIST GenAI is inviting teams from academia, industry, and research labs to submit either “generators” — AI systems that generate content — or “discriminators,” systems designed to identify AI-generated content.
In the study, generators will be given a topic and a set of documents and tasked with producing a summary, while discriminators must detect whether a given summary was written by an AI. To ensure fairness, NIST GenAI will provide the data necessary to train generators and discriminators. Systems trained on publicly available data, including but not limited to open models such as Meta's Llama 3, won't be accepted.
Registration for the pilot will open on May 1st, with results expected to be published in February 2025.
The launch of NIST GenAI, and its deepfake-focused study, comes amid explosive growth in the volume of deepfakes.
According to data from deepfake-detection firm Clarity, more than 900% more deepfakes have been created this year compared with the same period last year. Understandably, that's causing alarm: a recent YouGov poll found that 85% of Americans are concerned about the spread of misleading deepfakes online.
The launch of NIST GenAI is part of NIST's response to President Joe Biden's executive order on AI, which laid out rules requiring AI companies to be more transparent about how their models work and established a number of new standards, including for labeling content generated by AI.
This is also the first AI-related announcement from NIST since former OpenAI researcher Paul Christiano was appointed to NIST's AI Safety Institute.
Christiano was a controversial choice owing to his “doomerist” views; he once predicted a 50% chance that AI development would end in [humanity’s destruction]. Critics, reportedly including scientists within NIST, fear that Christiano will encourage the AI Safety Institute to focus on “fantasy scenarios” rather than the real and immediate risks posed by AI.
NIST says that NIST GenAI will inform the AI Safety Institute's work.