The National Institute of Standards and Technology is seeking comments on developing an Artificial Intelligence Risk Management Framework (AI RMF) that would improve organizations’ ability to incorporate trustworthiness into the design, development and use of AI systems.
“The Framework aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses,” NIST wrote in a July 28 request for information published in the Federal Register.
NIST wants input on how the framework should address challenges in AI risk management, including identification, assessment, prioritization, response and communication of AI risks; how organizations currently assess and manage AI risk, including bias and harmful outcomes; and how AI can be developed so that it lessens the potential negative impact on individuals and society, the RFI said.
NIST also asks that ideas on common definitions and characterizations for the aspects of trustworthiness be submitted, as well as best practices that might align with an AI risk framework.
NIST plans to develop its AI RMF with the same practices it used for the widely embraced 2014 Cybersecurity Framework and the 2020 Privacy Framework.
Responses are due Aug. 19. Read the full RFI here.