The National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (AI RMF 1.0) on January 26, 2023. On the same day, the White House announced its commitment to collaborate on responsible AI advancement under the U.S.-EU Trade and Technology Council (TTC). Together, these announcements underscore the urgent need for AI governance in the enterprise, and the EU's AI Act, expected in 2023, will raise the stakes for businesses further. AI governance is no longer a nice-to-have.
A Step In The Right Direction …
The core tenet of this first-of-its-kind standards framework is to mitigate harm to individuals, organizations, and ecosystems from inappropriate use of AI while still reaping the benefits of this transformative technology. The framework proposes governance as a culture, supported by mapping context and risk, measuring and analyzing risk, and managing risks across the AI lifecycle. The release is timely: AI regulations are rolling out, new AI technologies such as generative AI are gaining momentum, and enterprises are working to ensure that AI is deployed in a responsible and trusted manner. The framework provides a comprehensive view and catalog of AI governance capabilities, especially in terms of what they mean for enterprises. Forrester believes that concepts such as risk trade-offs, executive participation in AI testing, consideration of third-party AI vetting, and building upon existing risk frameworks are consistent with how we think enterprises must approach AI governance.
(Image source: NIST)
… But Proceed With Caution
Chief data officers and heads of data science need to navigate this framework wisely as they interpret and apply it to their AI governance efforts, because it remains descriptive rather than prescriptive. Why? Because:
- The conflicts of interest are evident. Cross-community collaboration brought expertise and special interests together, leading to contradictions in the framework. Some of its assertions are technically true but disingenuous, inappropriately bringing public arguments from areas such as social media and advertising front and center in what should be a neutral guide. Read Forrester's research on how organizations should design and systematically run risk assessments and make use of data clean rooms.
- Mapping and measurement are still challenging. The framework calls out the challenges of opaque, black-box AI, even stating that measurement may be implausible. At the same time, it names mapping and measurement as critical competencies. Enterprises may see this as a gating factor for AI governance progress, or for innovation overall. Read Forrester's research on explainable AI and AI fairness for techniques to establish a context-oriented measurement framework.
- The role of data governance is ambiguous. The NIST framework never explicitly references data governance, and data stewards are missing from its list of roles. Yet Forrester's research finds that when chief data officers (CDOs) and data science leaders champion AI governance, they actively build upon existing data governance practices and evolve the roles and responsibilities for AI risk and data integrity. Future updates of the AI RMF will need to address the dependency between AI governance and data governance. Read Forrester's research on data governance to evolve data governance for AI risk.
- The framework is undifferentiated from other governance approaches. The list of AI governance considerations is detailed, but much of the framework remains generic and similar to other governance frameworks. Governance programs have a difficult history in organizations, stymied by lack of adoption, limited funding, bureaucracy, slowness, and missing ROI. More work is needed to recognize these challenges and barriers and to provide prescriptive advice for succeeding where past governance efforts have failed. Read Forrester's research on connected intelligence to modernize AI and data best practices.
- Adoption of these standards remains voluntary. Under regulations, the business case for AI governance is clear. In contrast, norms-based use cases related to areas such as free speech or offensive content are left open to interpretation, without explicit consequences to drive AI governance strategy. CDOs will need to incorporate these standards into their own governance strategies. Read Forrester's research on trusted data sharing to address norms-based use cases.
Ensuring the responsible adoption, development, and deployment of artificial intelligence takes considerable time and effort. The framework puts the AI governance conversation front and center in the enterprise. Feel free to schedule an inquiry to discuss how to apply the framework pragmatically to help future-proof your AI efforts.