AI Measure Requires Federal Agencies to Adopt Safety Standards

November 27, 2023

A new bipartisan bill seeks to limit the risks of artificial intelligence by requiring federal agencies to adopt safety standards around the emerging technology’s use.

Sens. Jerry Moran (R-Kan.) and Mark Warner (D-Va.) on Thursday proposed the legislation, which would direct the National Institute of Standards and Technology to provide guidance to agencies to deploy AI safely and securely.

As AI models have rapidly advanced, NIST in January released a comprehensive framework for companies to follow voluntarily on mitigating potential harms, such as privacy violations and bias. Thursday's legislation would require federal agencies to adopt that framework.

The guidelines “should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions,” Moran said in a statement.

Rep. Ted Lieu (D-Calif.) plans to introduce companion legislation in the House, according to Jenna Bushnell, a spokesperson for the lawmaker.

Thursday’s proposal, if signed into law, would strengthen the Biden administration’s AI regulatory goals. It comes days after President Joe Biden signed a sweeping executive order directing federal agencies to set rules on AI to ensure the technology is used safely and responsibly.

Under the order, AI companies developing the most powerful systems that could threaten national security would have to submit test results to the government before deploying the tools. NIST will have to coordinate with agencies to establish guidelines on red-teaming and assessing the safety of such models.

Though industry and civil society officials have widely praised the order, many say it’s still up to Congress to step in and regulate the technology. Lawmakers can codify elements of the order and prevent the possibility of future administrations reversing it, they say.

“There’s a lot of this executive order that’s focusing on NIST coming up with rules and regulations, and NIST today, I don’t think is equipped for this,” said David Brumley, a Carnegie Mellon University professor who runs the cybersecurity company ForAllSecure, citing a lack of resources.

Moran over the summer proposed including in the Senate’s national defense bill language similar to the measure he and Intelligence Committee Chairman Warner introduced Thursday. Technology companies including Okta, Workday, and AI startup Hugging Face have endorsed the legislation.

“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” Warner said in a statement. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly.”