
ISO/IEC 23894 — AI System Risk Management

ISO/IEC 23894:2023 provides guidance on managing risks specific to artificial intelligence throughout the entire AI system lifecycle, building upon the ISO 31000 framework.

What is this document?

ISO/IEC 23894:2023 is an international standard titled "Information technology — Artificial intelligence — Guidance on risk management" that provides guidance to organisations on managing risks associated with AI systems. It was published in February 2023 by ISO/IEC JTC 1/SC 42.

Unlike ISO/IEC 42001, which establishes a management system, ISO/IEC 23894 focuses specifically on the risk management process — identifying, assessing, and mitigating AI-specific risks throughout the entire system lifecycle.

Key points

Relationship with ISO 31000

ISO/IEC 23894 builds upon the established principles of ISO 31000:2018, the broader international standard for risk management. This approach ensures compatibility with existing organisational risk management practices while addressing AI-specific challenges.

The standard does not create an entirely new methodology but rather extends well-known risk management concepts with AI-specific considerations.

Key elements of the standard

1. AI system risk management principles

  • Integration with existing organisational risk management processes
  • Consideration of risks throughout the entire AI system lifecycle
  • Involvement of interested parties in the risk assessment process

2. Risk management framework

  • Leadership and organisational commitment
  • Framework design tailored to the organisational context
  • Framework implementation, evaluation, and improvement

3. Risk management process

  • Communication and consultation — Ongoing dialogue with interested parties
  • Establishing context — Defining internal and external parameters
  • Risk identification — Recognising AI-specific sources of risk
  • Risk analysis — Determining likelihood and impact
  • Risk evaluation — Prioritising risks for treatment
  • Risk treatment — Selecting and implementing mitigation measures
  • Monitoring and review — Ongoing monitoring of risks and of the effectiveness of treatment measures
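
The analysis and evaluation steps above can be illustrated with a simple likelihood-times-impact scoring sketch. The 1-5 scales and the treatment threshold are illustrative assumptions; ISO/IEC 23894 leaves the choice of risk criteria to the organisation.

```python
# Illustrative risk scoring: likelihood x impact, each on an assumed 1-5 scale.
# The scales and the treatment threshold are assumptions for this sketch,
# not values prescribed by ISO/IEC 23894.

def risk_score(likelihood: int, impact: int) -> int:
    """Risk analysis: combine likelihood and impact (each 1-5) into one score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def needs_treatment(score: int, threshold: int = 10) -> bool:
    """Risk evaluation: prioritise risks whose score meets the threshold."""
    return score >= threshold

score = risk_score(likelihood=4, impact=3)  # e.g. biased training data
print(score, needs_treatment(score))        # 12 True
```

In practice the scales, the combination rule, and the threshold would come from the risk criteria defined when establishing context, not be hard-coded.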

AI-specific risks addressed by the standard

The standard identifies risks specific to AI systems:

  • Bias and discrimination — Unintentional bias in training data or algorithms
  • Lack of explainability — Inability to explain how an AI system reaches decisions
  • Robustness and reliability — System behaviour in unforeseen situations
  • Data privacy — Risks associated with processing personal data
  • Security — Vulnerability of AI systems to attacks, including adversarial attacks
  • System autonomy — Risks associated with the level of AI system autonomy
  • Data dependency — Quality and representativeness of training data
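
As a starting point for risk identification, the categories above could be kept in a small machine-readable taxonomy. The category keys paraphrase the list above; the example mitigations are illustrative assumptions, not measures prescribed by the standard.

```python
# Illustrative taxonomy of the AI-specific risk categories listed above,
# each paired with one example mitigation (the mitigations are assumptions
# for this sketch, not prescriptions from ISO/IEC 23894).
AI_RISK_CATEGORIES = {
    "bias_and_discrimination":   "audit training data for representativeness",
    "lack_of_explainability":    "prefer interpretable models or add explanation tooling",
    "robustness_and_reliability": "stress-test with out-of-distribution inputs",
    "data_privacy":              "minimise and pseudonymise personal data",
    "security":                  "red-team against adversarial inputs",
    "system_autonomy":           "define human-oversight checkpoints",
    "data_dependency":           "track data lineage and quality metrics",
}

for category, mitigation in AI_RISK_CATEGORIES.items():
    print(f"{category}: {mitigation}")
```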

AI system lifecycle

The standard emphasises that risks must be considered at all stages:

  1. Planning and design
  2. Data collection and preparation
  3. Model development and training
  4. Validation and testing
  5. Deployment and launch
  6. Operational use and monitoring
  7. Decommissioning
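
A lifecycle-aware risk review might simply iterate over the seven stages above. The stage names restate the list; the per-stage example risks are assumptions drawn from the risk categories earlier in this article.

```python
# The seven lifecycle stages from the standard, each paired with one
# example risk.  The stage-to-risk pairing is an illustrative assumption.
LIFECYCLE_STAGES = [
    ("planning_and_design",             "unclear intended purpose"),
    ("data_collection_and_preparation", "unrepresentative training data"),
    ("model_development_and_training",  "unintentional bias"),
    ("validation_and_testing",          "gaps in test coverage"),
    ("deployment_and_launch",           "configuration and integration errors"),
    ("operational_use_and_monitoring",  "data drift over time"),
    ("decommissioning",                 "retained personal data"),
]

def review_stage(stage_name: str) -> str:
    """Look up the example risk to review at a given lifecycle stage."""
    for stage, risk in LIFECYCLE_STAGES:
        if stage == stage_name:
            return risk
    raise KeyError(f"unknown stage: {stage_name}")

print(review_stage("decommissioning"))  # retained personal data
```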

How does it apply to organisations?

Relevance to the EU AI Act

ISO/IEC 23894 directly supports compliance with Article 9 of the EU AI Act, which mandates a risk management system for high-risk AI systems:

  • Identification of known and foreseeable risks to health, safety, or fundamental rights
  • Assessment of risks that may arise when the system is used in accordance with its intended purpose
  • Assessment of risks arising from reasonably foreseeable misuse
  • Mitigation and control of identified risks

Differences between ISO/IEC 23894 and ISO/IEC 42001

Aspect   | ISO/IEC 42001                      | ISO/IEC 23894
Focus    | AI management system               | Risk management process
Type     | Requirements (certifiable)         | Guidance (not certifiable)
Scope    | Broader: entire management system  | Narrower: specifically risk management
Approach | Organisational framework           | Practical process guidance

Practical implementation steps

  1. Establish context — Define the scope of application and risk criteria for your AI systems
  2. Identify risks — Use the standard's list of AI-specific risks as a starting point
  3. Analyse risks — Assess the likelihood and impact of each identified risk
  4. Define measures — Select appropriate mitigation measures for each risk
  5. Implement — Execute measures and integrate them into operational processes
  6. Monitor — Continuously monitor risks and the effectiveness of measures
  7. Document — Maintain records of the entire risk management process
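
The seven steps above could be wired into a minimal risk register. Everything here (the class names, fields, and 1-5 scales) is an assumption for illustration, not a structure defined by the standard.

```python
from dataclasses import dataclass, field

# Minimal risk-register sketch following the implementation steps above.
# Class names, fields, and the 1-5 scales are assumptions for illustration.

@dataclass
class Risk:
    name: str
    likelihood: int          # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int              # 1 (negligible) .. 5 (severe), assumed scale
    treatment: str = "none"  # chosen mitigation measure (step 4)

    @property
    def score(self) -> int:
        """Step 3: analyse — combine likelihood and impact."""
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    scope: str                               # step 1: establish context
    risks: list[Risk] = field(default_factory=list)

    def identify(self, risk: Risk) -> None:  # step 2: identify
        self.risks.append(risk)

    def prioritised(self) -> list[Risk]:     # step 3: rank by score
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

register = RiskRegister(scope="CV screening model")
register.identify(Risk("training-data bias", likelihood=4, impact=4,
                       treatment="bias audit before each release"))
register.identify(Risk("data drift", likelihood=3, impact=2,
                       treatment="monthly monitoring report"))

top = register.prioritised()[0]
print(top.name, top.score)  # training-data bias 16
```

Steps 5 to 7 (implement, monitor, document) would hook this register into operational processes and record-keeping rather than live in the data model itself.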

Relevant EU AI Act articles

Article   | Connection to ISO 23894
Art. 9    | Risk management system -> Entire standard
Art. 9(2) | Risk identification and analysis -> Identification process
Art. 9(4) | Risk management measures -> Risk treatment
Art. 9(7) | System testing -> Validation and evaluation
Art. 72   | Post-market monitoring -> Continuous risk monitoring
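
For traceability, the mapping above could be kept as a simple lookup table in compliance tooling. This is purely a restatement of the table; the dictionary shape is an assumption.

```python
# The EU AI Act article mapping from the table above as a lookup table.
EU_AI_ACT_MAPPING = {
    "Art. 9":    "Risk management system -> Entire standard",
    "Art. 9(2)": "Risk identification and analysis -> Identification process",
    "Art. 9(4)": "Risk management measures -> Risk treatment",
    "Art. 9(7)": "System testing -> Validation and evaluation",
    "Art. 72":   "Post-market monitoring -> Continuous risk monitoring",
}

print(EU_AI_ACT_MAPPING["Art. 9(4)"])  # Risk management measures -> Risk treatment
```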

Source document

Official standard page: ISO/IEC 23894:2023 — AI — Guidance on risk management
