The National Institute of Standards and Technology (NIST) has published an initial draft of its AI Risk Management Framework (AI RMF). The AI RMF outlines the risks inherent in the design, development, use and evaluation of AI systems, and includes a companion practice guide with examples and practices that can assist organizations in adopting the framework. NIST has asked for feedback to be submitted by April 29, 2022, and plans to hold a public workshop on March 29-31, 2022, during which it will solicit industry input on the AI RMF. Additionally, NIST will produce a second draft for comment, as well as host a third workshop, before publishing AI RMF 1.0 in January 2023.
NIST has also updated its special publication on Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. In this updated report, NIST emphasized the importance of tackling biases in artificial intelligence beyond data sets and machine learning processes. The report recommends that AI developers and researchers also understand how biases arise in algorithms and data use, as well as the larger societal context in which AI systems are being used.
Reva Schwartz, one of the report’s authors and NIST principal investigator for AI bias, commented in a press release:
“Context is everything. AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”
While it is unclear whether NIST’s efforts will lead to federal legislation on artificial intelligence, the Federal Trade Commission (FTC) and state legislatures have been stepping up their efforts to study the impact of the use of artificial intelligence and the potential roles for policymakers.
In December 2021, the FTC issued a notice that it was “considering initiating a rulemaking under Section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” As previewed by its notice, there are a number of privacy, cybersecurity and AI issues that the FTC may seek to regulate. For example, in April 2021, the FTC published a blog post warning companies that bias in AI systems could result in “deception, discrimination—and an FTC enforcement action.” Additionally, in a series of resolutions passed in fall 2021, the FTC declared algorithmic and biometric bias a focus of enforcement in the upcoming years.
AI is also a focus at the state and local level. For example, the California Privacy Protection Agency is working on rules to regulate algorithms and other data-driven technologies. The rules would govern, among other areas, consumers' rights to opt out of automated decision-making and to obtain information about the logic behind such decisions. Additionally, the Colorado General Assembly is currently considering a bill that would restrict insurers' use of external consumer data, algorithms and predictive models that result in unfair discrimination. The Rhode Island General Assembly is considering a bill that mirrors the proposed Colorado legislation.
At the same time, local governments in New York City and Detroit have passed regulations to mitigate biases and discriminatory practices of algorithms. The New York City Council passed the nation’s first bill placing limits on the use of AI in the hiring process. The Detroit City Council approved an ordinance requiring more accountability and transparency in the city’s surveillance systems.
You can read more on the regulation of AI in the U.S. and internationally here, as well as our insights on data ethics here.