A recently introduced Senate bill would, if passed, direct the National Institute of Standards and Technology (NIST) to develop standards for third-party audits of AI.
It would also establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking certification of their ability to conduct internal or external assurance for AI systems. In addition, the bill would require NIST to study the AI assurance ecosystem, including current capabilities and methodologies, the facilities and resources needed, and overall market demand for internal and external AI assurance.
"AI is moving faster than any of us thought it would two years ago," said Sen. Hickenlooper, who serves as the chair of the Senate Subcommittee on Consumer Protection, Product Safety and Data Security. "But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it's too late. Otherwise, AI could bring more harm than good to our lives."
The bill defines AI along the lines of the National Artificial Intelligence Initiative Act of 2020, which describes AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Under that definition, AI systems use machine- and human-based inputs to perceive real and virtual environments, abstract those perceptions into models through automated analysis, and use model inference to formulate options for information or action.
This represents but the latest in a series of actions to encourage regulation and oversight of artificial intelligence, not least of which was the