
Understanding Compliance With the NIST AI Risk Management Framework

Incorporating artificial intelligence (AI) seems like a logical step for businesses looking to maximize efficiency and productivity. But the adverse effects of AI use, such as data security risks and misinformation, could do more harm than good.

According to the World Economic Forum’s Global Risks Report 2024, AI-generated misinformation and disinformation are among the top global risks businesses face today.

To address the security risks posed by the increasing use of AI technologies in business processes, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (AI RMF 1.0) in January 2023. 

Adhering to this framework not only puts your organization in a strong position to avoid the dangers of AI-based exploits, it also adds an impressive compliance credential to your portfolio, instilling confidence in external stakeholders. Moreover, while the NIST AI RMF is a guideline rather than a regulation, several AI laws are currently in the process of being enacted, so adhering to NIST’s framework helps CISOs future-proof their AI compliance postures.

Let’s examine the four key pillars of the framework – govern, map, measure and manage – and see how you can incorporate them to better protect your organization from AI-related risks.

1. Establish AI Governance Structures

In the context of the NIST AI RMF, governance is the practice of establishing the processes, procedures, and standards that guide responsible AI development, deployment, and use. Its main goal is to connect the technical aspects of AI system design and development with organizational goals, values, and principles.

Strong governance starts at the top. Under the framework’s “Govern” function, NIST recommends establishing accountability structures, with appropriate teams made responsible for AI risk management. These teams are then responsible for putting in place the structures, systems, and processes needed to build a strong culture of responsible AI use throughout the organization.
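To make this concrete, here is a minimal sketch in Python of what such an accountability structure might look like as a living record rather than a static document. The role titles, duties, and escalation paths are hypothetical; the NIST AI RMF prescribes no particular schema.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceRole:
    title: str                   # e.g., "AI Risk Officer" (hypothetical)
    responsibilities: list[str]  # duties owned by this role
    escalation_path: str         # who this role reports AI risks to

@dataclass
class AIGovernanceStructure:
    roles: list[GovernanceRole] = field(default_factory=list)

    def accountable_for(self, duty: str) -> list[str]:
        """Return the titles of every role that owns a given duty."""
        return [r.title for r in self.roles if duty in r.responsibilities]

structure = AIGovernanceStructure(roles=[
    GovernanceRole("AI Risk Officer",
                   ["model approval", "risk reporting"],
                   escalation_path="CISO"),
    GovernanceRole("Data Protection Lead",
                   ["privacy review"],
                   escalation_path="AI Risk Officer"),
])

print(structure.accountable_for("model approval"))  # ['AI Risk Officer']
```

Keeping the structure queryable like this makes gaps visible: any duty with no accountable role is an immediate governance finding.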

Using automated tools is a great way to streamline the often tedious process of policy creation and governance. “We view it as our responsibility to help organizations maximize the benefits of AI while effectively mitigating the risks and ensuring compliance with best practices and good governance,” said Arik Solomon, CEO of Cypago, a SaaS platform that automates governance, risk management, and compliance (GRC) processes in line with the latest frameworks.

“These latest features ensure that Cypago supports the newest AI and cyber governance frameworks, enabling GRC and cybersecurity teams to automate GRC with the most up-to-date requirements,” he added.

Rather than existing as a stand-alone component, governance should be incorporated into every other NIST AI RMF function, particularly those associated with assessment and compliance. This will foster a strong organizational risk culture and improve internal processes and standards.

2. Map and Categorize AI Systems

The framework’s “Map” function supports governance efforts while also providing a foundation for measuring and managing risk. It’s here that the risks associated with an AI system are put into context, which ultimately determines whether a given AI solution is appropriate or needed at all.

As Opice Blum data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilization.” 

But how do you actually put this mapping process into practice?

NIST recommends the following approach:

  • Clearly establish why you need or want to implement the AI system. What are the expectations? In what settings will the system be deployed?
  • Map all of the risks and benefits associated with using the system, and determine the organizational risk tolerance for operating it, covering not only monetary costs but also those stemming from AI errors or malfunctions.
  • Analyze the likelihood and magnitude of the impact the AI system will have on the organization, including employees, customers, and society as a whole (a minimal inventory sketch follows this list).
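Putting this into practice can be as simple as keeping a structured inventory record per AI system. The sketch below is a hypothetical illustration of that idea; the field names, impact levels, and tolerance check are assumptions, not part of the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class Impact(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class AISystemRecord:
    name: str
    purpose: str             # why the system is being adopted
    deployment_context: str  # where it will operate
    benefits: list[str]
    risks: list[str]
    likelihood: float        # estimated probability of harm, 0..1
    magnitude: Impact        # severity if the harm occurs
    risk_tolerance: Impact   # highest impact the org will accept

    def within_tolerance(self) -> bool:
        """Flag systems whose potential impact exceeds tolerance."""
        return self.magnitude.value <= self.risk_tolerance.value

record = AISystemRecord(
    name="support-chatbot",
    purpose="Reduce response time for customer queries",
    deployment_context="Public-facing customer portal",
    benefits=["faster triage"],
    risks=["hallucinated answers", "PII leakage"],
    likelihood=0.2,
    magnitude=Impact.HIGH,
    risk_tolerance=Impact.MODERATE,
)

print(record.within_tolerance())  # False: flag this system for review
```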

3. Measure AI Performance and Risk

The “Measure” function utilizes qualitative and quantitative techniques to analyze and monitor the AI-related risks identified in the “Map” function.

AI systems should be tested before deployment and frequently thereafter. But measuring risk in AI systems can be tricky. The technology is fairly new, so there are no standardized metrics yet. This might change in the near future, as developing these metrics is a high priority for many consulting firms. For example, Ernst & Young (EY) is developing an AI Confidence Index.

“Our confidence index is founded on five criteria – privacy and security, bias and fairness, reliability, transparency and explainability, and the last is accountability,” noted Kapish Vanvaria, EY Americas Risk Market Leader. The index’s other axis covers regulations and ethics.

“Then you can have a heat map of the different processes you’re looking at and the functions in which they’re deployed,” he says. “And you can go through each one and apply a weighted scoring method to it.”
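As a rough illustration of that weighted scoring idea, the sketch below rates a single process against the five criteria Vanvaria lists and aggregates the ratings with weights. The weights, ratings, and process name are invented for illustration; EY’s actual index methodology is not described here.

```python
# Hypothetical weights over the five criteria (must sum to 1.0).
CRITERIA_WEIGHTS = {
    "privacy_and_security": 0.25,
    "bias_and_fairness": 0.25,
    "reliability": 0.20,
    "transparency_and_explainability": 0.15,
    "accountability": 0.15,
}

def confidence_score(ratings: dict[str, float]) -> float:
    """Weighted average of per-criterion ratings (each 0-10)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# One row of the "heat map": a process and its per-criterion ratings.
fraud_detection = {
    "privacy_and_security": 8.0,
    "bias_and_fairness": 6.5,
    "reliability": 9.0,
    "transparency_and_explainability": 5.0,
    "accountability": 7.0,
}

print(f"{confidence_score(fraud_detection):.2f}")  # 7.22
```

Scoring every process this way yields the heat map Vanvaria describes: a grid of functions and processes where low-scoring cells point to where remediation effort should go first.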

The NIST framework prioritizes three main aspects of an AI system for measurement: trustworthiness, social impact, and how humans interact with it. The measuring process will likely consist of extensive software testing, performance assessments, and benchmarks, along with reporting and documentation of the results.

4. Adopt Risk Management Strategies

The “Manage” function puts everything together by allocating the necessary resources to regularly address the risks uncovered during the previous stages. How to do so is typically determined through governance efforts, and can take the form of human intervention, automated tools for real-time detection and response, or other strategies.

To manage AI risks effectively, it’s crucial to maintain ongoing visibility across all organizational tools, applications, and models. AI should not be handled as a separate entity but integrated seamlessly into a comprehensive risk management framework.
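A minimal sketch of that integration, assuming a simple risk register fed by the earlier mapping and measuring steps, might route each risk to a treatment based on the organization’s tolerance. The register entries, threshold, and treatments below are hypothetical.

```python
RISK_TOLERANCE = 6.0  # illustrative threshold: scores above this require action

# A toy risk register; in practice this would be populated by the
# "Map" and "Measure" steps described earlier.
risk_register = [
    {"system": "support-chatbot", "risk": "hallucinated answers", "score": 7.5},
    {"system": "support-chatbot", "risk": "PII leakage",          "score": 8.2},
    {"system": "fraud-detection", "risk": "model drift",          "score": 4.1},
]

def treat(entry: dict) -> str:
    """Pick a treatment: escalate severe risks, monitor the rest."""
    if entry["score"] > RISK_TOLERANCE:
        return f"ESCALATE {entry['risk']} on {entry['system']} for human review"
    return f"MONITOR {entry['risk']} on {entry['system']} with automated alerts"

# Work the register from the highest-scoring risk down.
for entry in sorted(risk_register, key=lambda e: e["score"], reverse=True):
    print(treat(entry))
```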

Ayesha Gulley, an AI policy expert from Holistic AI, urges businesses to adopt risk management strategies early, taking into account five factors: robustness, bias, privacy, explainability, and efficacy. Holistic AI’s software platform includes modules for AI auditing and risk posture reporting.

“While AI risk management can be started at any point in the project development,” she said, “implementing a risk management framework sooner rather than later can help enterprises increase trust and scale with confidence.”

Evolve With AI

The NIST AI RMF is not designed to restrict the efficient use of AI technology. On the contrary, it aims to encourage adoption and innovation by providing clear guidelines and best practices for developing and using AI securely and responsibly.

Implementing the framework will not only help you meet compliance standards but also make your organization far more capable of maximizing the benefits of AI technologies without taking on undue risk.

Kayal
