
Understanding Compliance With the NIST AI Risk Management Framework

Incorporating artificial intelligence (AI) seems like a logical step for businesses looking to maximize efficiency and productivity. But the adverse effects of AI use, such as data security risk and misinformation, could bring more harm than good.

According to the World Economic Forum’s Global Risks Report 2024, AI-generated misinformation and disinformation are among the top global risks businesses face today.

To address the security risks posed by the increasing use of AI technologies in business processes, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (AI RMF 1.0) in January 2023. 

Adhering to this framework not only puts your organization in a strong position to avoid the dangers of AI-based exploits but also adds an impressive compliance credential to your portfolio, instilling confidence in external stakeholders. Moreover, while the NIST AI RMF is a guideline rather than a regulation, several AI laws are currently in the process of being enacted, so adhering to NIST’s framework helps CISOs future-proof their AI compliance posture.

Let’s examine the four key pillars of the framework – govern, map, measure, and manage – and see how you can incorporate them to better protect your organization from AI-related risks.

1. Establish AI Governance Structures

In the context of the NIST AI RMF, governance means establishing the policies, procedures, and standards that guide responsible AI development, deployment, and use. Its main goal is to connect the technical aspects of AI system design and development with organizational goals, values, and principles.

Strong governance starts at the top. Under the framework’s “Govern” function, NIST recommends establishing accountability structures that make the appropriate teams responsible for AI risk management. These teams will be responsible for putting in place structures, systems, and processes, with the end goal of establishing a strong culture of responsible AI use throughout the organization.
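Some organizations make these accountability structures concrete by encoding them as a machine-readable matrix, so ownership of each AI risk area is explicit and auditable. The minimal Python sketch below is hypothetical; the roles, risk areas, and review cadences are invented for illustration and are not prescribed by NIST.

```python
from dataclasses import dataclass

@dataclass
class AccountabilityAssignment:
    """One row of a hypothetical AI governance accountability matrix."""
    risk_area: str             # e.g. "model bias", "data privacy"
    owner_role: str            # role accountable for this risk area
    review_cadence_days: int   # how often the assignment is revisited

# Illustrative entries only; real assignments come from your governance process.
ACCOUNTABILITY_MATRIX = [
    AccountabilityAssignment("data privacy", "Chief Privacy Officer", 90),
    AccountabilityAssignment("model bias and fairness", "AI Ethics Lead", 90),
    AccountabilityAssignment("security of AI pipelines", "CISO", 30),
]

def owners_for(risk_area: str) -> list[str]:
    """Look up which roles are accountable for a given risk area."""
    return [a.owner_role for a in ACCOUNTABILITY_MATRIX
            if a.risk_area == risk_area]

print(owners_for("data privacy"))  # ['Chief Privacy Officer']
```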

Using automated tools is a great way to streamline the often tedious process of policy creation and governance. “We view it as our responsibility to help organizations maximize the benefits of AI while effectively mitigating the risks and ensuring compliance with best practices and good governance,” said Arik Solomon, CEO of Cypago, a SaaS platform that automates governance, risk management, and compliance (GRC) processes in line with the latest frameworks.

“These latest features ensure that Cypago supports the newest AI and cyber governance frameworks, enabling GRC and cybersecurity teams to automate GRC with the most up-to-date requirements.”

Rather than existing as a stand-alone component, governance should be incorporated into every other NIST AI RMF function, particularly those associated with assessment and compliance. This will foster a strong organizational risk culture and improve internal processes and standards.

2. Map and Categorize AI Systems

The framework’s “Map” function supports governance efforts while also providing a foundation for measuring and managing risk. It’s here that the risks associated with an AI system are put into context, which ultimately determines whether the given AI solution is appropriate or needed at all.

As Opice Blum data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilization.” 

But how do you actually put this mapping process into practice?

NIST recommends the following approach (a code sketch of the resulting mapping follows the list):

  • Clearly establish why you need or want to implement the AI system. What are the expectations? In what settings will the system be deployed? This is also the point at which to determine the organizational risk tolerance for operating the system, accounting not only for monetary costs but also for those stemming from AI errors or malfunctions.
  • Map all of the risks and benefits associated with using the system.
  • Analyze the likelihood and magnitude of the impact the AI system will have on the organization, including employees, customers, and society as a whole.
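One lightweight way to operationalize these steps is to keep a structured inventory record per AI system, capturing its purpose, deployment settings, risk tolerance, and mapped risks and benefits. The following Python sketch is purely illustrative; the field names and scoring method are assumptions, not an official NIST schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemMapping:
    """Hypothetical inventory record for the NIST AI RMF 'Map' function."""
    name: str
    purpose: str                    # why the system is needed
    deployment_settings: list[str]  # where the system will run
    risk_tolerance: str             # e.g. "low", "medium", "high"
    risks: list[str] = field(default_factory=list)
    benefits: list[str] = field(default_factory=list)
    impact_likelihood: float = 0.0  # estimated, 0..1
    impact_magnitude: float = 0.0   # estimated, 0..1

    def impact_score(self) -> float:
        """Simple likelihood x magnitude product for triage (an assumption)."""
        return self.impact_likelihood * self.impact_magnitude

# Example record for a hypothetical customer-support chatbot.
chatbot = AISystemMapping(
    name="support-chatbot",
    purpose="Deflect tier-1 support tickets",
    deployment_settings=["customer portal"],
    risk_tolerance="medium",
    risks=["hallucinated answers", "PII leakage"],
    benefits=["faster response times"],
    impact_likelihood=0.3,
    impact_magnitude=0.6,
)
print(f"{chatbot.name}: impact score {chatbot.impact_score():.2f}")
```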

3. Measure AI Performance and Risk

The “Measure” function utilizes qualitative and quantitative techniques to analyze and monitor the AI-related risks identified in the “Map” function.

AI systems should be tested before deployment and frequently thereafter. But measuring AI risk can be tricky: the technology is fairly new, so there are no standardized metrics yet. This might change in the near future, as developing these metrics is a high priority for many consulting firms. Ernst & Young (EY), for example, is developing an AI Confidence Index.

“Our confidence index is founded on five criteria – privacy and security, bias and fairness, reliability, transparency and explainability, and the last is accountability,” noted Kapish Vanvaria, EY Americas Risk Market Leader. The index’s other axis covers regulations and ethics.

“Then you can have a heat map of the different processes you’re looking at and the functions in which they’re deployed,” he said. “And you can go through each one and apply a weighted scoring method to it.”
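As a rough illustration of such a weighted scoring method, the sketch below scores a single hypothetical process against the five criteria Vanvaria lists. The weights and scores are invented for the example and do not reflect EY’s actual methodology.

```python
# Hypothetical weighted scoring across the five confidence criteria.
# Weights and scores are illustrative only, not EY's methodology.
CRITERIA_WEIGHTS = {
    "privacy and security": 0.25,
    "bias and fairness": 0.25,
    "reliability": 0.20,
    "transparency and explainability": 0.15,
    "accountability": 0.15,
}

def confidence_score(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores (each in 0..1)."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Example: scoring a hypothetical resume-screening process.
resume_screening = {
    "privacy and security": 0.8,
    "bias and fairness": 0.5,
    "reliability": 0.7,
    "transparency and explainability": 0.6,
    "accountability": 0.9,
}
print(f"Confidence score: {confidence_score(resume_screening):.2f}")
```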

The NIST framework prioritizes three main aspects of an AI system to measure: trustworthiness, social impact, and how humans interact with the system. The measuring process will likely consist of extensive software testing, performance assessments, and benchmarks, along with reporting and documentation of the results.
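That testing can be automated as a deployment gate: a check that fails whenever a measured metric drifts past a threshold agreed during the mapping stage. The sketch below is a minimal, hypothetical example; the metric names and threshold values are assumptions.

```python
# Hypothetical pre-deployment gate: block release if measured metrics
# exceed the thresholds agreed during the "Map" stage.
THRESHOLDS = {
    "demographic_parity_gap": 0.10,  # max tolerated bias gap
    "error_rate": 0.05,              # max tolerated prediction error
}

def deployment_gate(measured: dict[str, float]) -> list[str]:
    """Return a list of violations; an empty list means the system may ship."""
    return [
        f"{metric}={value:.3f} exceeds threshold {THRESHOLDS[metric]}"
        for metric, value in measured.items()
        if value > THRESHOLDS[metric]
    ]

violations = deployment_gate({"demographic_parity_gap": 0.14, "error_rate": 0.03})
for v in violations:
    print("BLOCKED:", v)
```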

4. Adopt Risk Management Strategies

The “Manage” function puts everything together by allocating the necessary resources to regularly address the risks uncovered during the previous stages. How that is done is typically determined through governance efforts and can take the form of human intervention, automated tools for real-time detection and response, or other strategies.

To manage AI risks effectively, it’s crucial to maintain ongoing visibility across all organizational tools, applications, and models. AI should not be handled as a separate entity but integrated seamlessly into a comprehensive risk management framework.
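One way to maintain that visibility is a living risk register that ties each mapped risk to a mitigation and an owner and flags entries that are overdue for review. The Python sketch below is a hypothetical illustration; its fields and review cadences are assumptions rather than part of the framework.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    """Hypothetical risk-register row for the 'Manage' function."""
    risk: str
    mitigation: str           # e.g. "human review", "automated detection"
    owner: str
    last_reviewed: date
    review_interval_days: int = 30

    def is_overdue(self, today: date) -> bool:
        return today > self.last_reviewed + timedelta(days=self.review_interval_days)

# Illustrative entries only.
register = [
    RiskEntry("prompt injection in chatbot", "automated input filtering",
              "AppSec team", date(2025, 1, 2)),
    RiskEntry("model drift in fraud scoring", "monthly human review",
              "Risk team", date(2024, 11, 15)),
]

today = date(2025, 1, 21)
for entry in register:
    if entry.is_overdue(today):
        print(f"Overdue review: {entry.risk} (owner: {entry.owner})")
```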

Ayesha Gulley, an AI policy expert from Holistic AI, urges businesses to adopt risk management strategies early, taking into account five factors: robustness, bias, privacy, explainability, and efficacy. Holistic AI’s software platform includes modules for AI auditing and risk posture reporting.

“While AI risk management can be started at any point in the project development,” she said, “implementing a risk management framework sooner than later can help enterprises increase trust and scale with confidence.”

Evolve With AI

The NIST AI Framework is not designed to restrict the efficient use of AI technology. On the contrary, it aims to encourage adoption and innovation by providing clear guidelines and best practices for developing and using AI securely and responsibly.

Implementing the framework will not only help you reach compliance standards but also make your organization much more capable of maximizing the benefits of AI technologies while keeping the associated risks in check.
