ISO/IEC 42005 Impact Assessment
Build audit-ready AI governance aligned with international standards
A structured impact assessment framework is the foundation of accountable, fair, and transparent AI.
Reinforce responsible AI with internationally recognised assessment practices
Detect and address risks before external scrutiny
Demonstrate fairness, accountability, and compliance in practice
Embedding Accountability in AI Governance
ISO/IEC 42005:2025 provides guidance on conducting AI system impact assessments. Unlike ISO/IEC 42001, which sets requirements for an AI Management System (AIMS) at the organisational level, 42005 guides the assessment of individual AI systems and their impacts on individuals, groups, and society. It is not a certifiable standard, but it helps organisations demonstrate that their AI governance is fair, defensible, and accountable. By applying 42005, organisations can strengthen trust, prepare for ISO/IEC 42001 certification, and reinforce alignment with the EU AI Act.
Our Approach
Clarify
We define the scope, context, and purpose of the AI system, identifying stakeholders, intended outcomes, and foreseeable risks.
Assess
We evaluate positive and negative impacts, including unintended consequences and misuse scenarios, across the AI lifecycle.
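As a concrete illustration of one common way impacts are rated (the scales and thresholds below are our own, not prescribed by ISO/IEC 42005), a simple severity-by-likelihood matrix can turn qualitative judgements into priority bands:

```python
# Illustrative only: a simple severity x likelihood matrix of the kind
# often used in impact assessments. ISO/IEC 42005 does not mandate any
# particular rating scale; these bands are our own.
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def risk_rating(severity: str, likelihood: str) -> str:
    """Combine severity and likelihood into a coarse priority band."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 6:
        return "act before deployment"
    if score >= 3:
        return "mitigate and monitor"
    return "monitor"

print(risk_rating("high", "possible"))  # -> "act before deployment"
```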
Document
We capture findings in a structured record aligned with ISO/IEC 42005, presenting risks, mitigations, monitoring, and oversight clearly.
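For illustration, such a record can be kept in machine-readable form. The sketch below is a minimal Python data model; the field names are hypothetical, since ISO/IEC 42005 describes what to document rather than a fixed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    # Hypothetical fields; not a schema defined by ISO/IEC 42005.
    description: str
    affected_stakeholders: list[str]
    severity: str          # e.g. "low" / "medium" / "high"
    likelihood: str        # e.g. "rare" / "possible" / "likely"
    mitigation: str
    monitoring: str        # how the risk is tracked after deployment

@dataclass
class ImpactAssessmentRecord:
    # One record per AI system, maintained over its lifecycle.
    system_name: str
    system_version: str
    scope_and_purpose: str
    intended_outcomes: list[str]
    stakeholders: list[str]
    risks: list[Risk] = field(default_factory=list)
    assessed_on: date = field(default_factory=date.today)
    next_review: date | None = None   # requires Python 3.10+
```

Keeping assessments in a form like this makes them easy to version-control, so the evidence trail shows exactly what changed between reviews.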
Integrate
We embed the assessment into your governance processes and ensure it is updated when systems change or risks evolve.
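A minimal sketch of how that update trigger could work, assuming a record that stores the assessed system version and a scheduled review date (an illustrative policy; the standard calls for reassessment on significant change but does not prescribe a mechanism):

```python
from datetime import date

def reassessment_due(last_assessed_version: str,
                     current_version: str,
                     next_review: date) -> bool:
    # Flag the assessment for update when the system has changed or the
    # scheduled review date has passed. Illustrative policy only.
    version_changed = last_assessed_version != current_version
    review_overdue = date.today() >= next_review
    return version_changed or review_overdue

# Example: a model retrained since its last assessment is flagged at once,
# regardless of the review date.
print(reassessment_due("1.2.0", "1.3.0", date(2099, 1, 1)))  # -> True
```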
The Result: Responsible AI Assurance
Fairness in focus
Highlight and mitigate risks of bias, discrimination, or rights impacts through structured impact assessments.
Certification readiness
Strengthen preparation for ISO/IEC 42001 certification with clear, audit-ready evidence.
Defensible accountability
Demonstrate responsible AI governance with auditable proof for regulators and stakeholders.
Frequently Asked Questions
What is ISO/IEC 42005?
It provides guidance for assessing the impacts of AI systems across their lifecycle, focusing on fairness, accountability, transparency, and societal alignment.
How does ISO/IEC 42005 differ from ISO/IEC 42001?
ISO/IEC 42001 defines requirements for an AI Management System at the organisational level, while ISO/IEC 42005 guides impact assessments of individual AI systems.
When should an impact assessment be carried out?
It should be used before deployment of an AI system and revisited whenever the system changes significantly, ensuring risks and impacts remain well managed.
What kinds of impacts does the standard cover?
The standard helps organisations evaluate ethical, societal, and operational impacts, including fairness, safety, transparency, accountability, and human-centred design.
What does an assessment deliver?
A structured impact assessment report with findings and recommendations, providing assurance that AI systems align with responsible governance and regulatory expectations.

