The New ISO Standard for AI Impact Assessment: ISO/IEC 42005, Published in May 2025
The world of AI governance has just taken a major step forward. In May 2025, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) released ISO/IEC 42005:2025, a groundbreaking new standard that sets formal expectations for AI system impact assessments.
Where previous frameworks focused largely on theoretical risks and system controls, ISO/IEC 42005 addresses a more urgent question: what is AI doing to people in the real world?
This development builds on the foundation laid by ISO/IEC 42001:2023, which introduced the world’s first certifiable AI Management System (AIMS). Together, these two standards establish a new baseline for responsible, transparent, and human-centered AI governance.
What Makes ISO/IEC 42005 Unique?
ISO/IEC 42005 focuses on the social, ethical, and individual impacts of AI systems. It moves beyond traditional risk assessments to evaluate how AI technologies are affecting users, especially those who may be vulnerable, marginalized, or excluded.
For example, impact assessments may uncover concerns in areas such as:
AI in Public Benefits Administration:
Algorithms determining access to unemployment support or housing benefits can harm vulnerable individuals if eligibility criteria are opaque or flawed.
Facial Recognition in Public Spaces:
Deployed in transport hubs or by law enforcement, raising serious concerns around surveillance, consent, and false positives.
AI for School or University Admissions:
Systems ranking or filtering students based on historical data that may disadvantage applicants from underrepresented backgrounds.
These are not hypothetical problems. They are examples of how AI can unintentionally replicate or amplify real-world inequities. ISO/IEC 42005 provides a practical framework to identify, document, and address these harms.
Core Components of ISO/IEC 42005
Organizations using this standard are expected to:
Identify who may be affected by an AI system’s decisions or behavior
Analyze social and ethical consequences throughout the AI lifecycle
Involve internal and external stakeholders, including impacted communities
Document the assessment process and mitigation steps taken
Integrate findings into governance and system improvement processes
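As an illustration only, the five expectations above could be captured in a simple structured record. ISO/IEC 42005 does not prescribe any data format or tooling, so every class and field name in this sketch is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the five expectations above.
# ISO/IEC 42005 prescribes no schema; this is an illustrative
# sketch of how an organization might document an assessment.
@dataclass
class ImpactAssessment:
    system_name: str
    affected_groups: list[str]          # who may be affected
    lifecycle_stage: str                # e.g. "design", "deployment"
    consequences: list[str]             # social/ethical impacts identified
    stakeholders_consulted: list[str]   # internal and external parties
    mitigations: list[str] = field(default_factory=list)

    def is_documented(self) -> bool:
        # Minimal completeness check: at least one identified
        # consequence and at least one mitigation recorded.
        return bool(self.consequences) and bool(self.mitigations)

assessment = ImpactAssessment(
    system_name="benefits-eligibility-model",
    affected_groups=["applicants", "case workers"],
    lifecycle_stage="deployment",
    consequences=["opaque eligibility criteria"],
    stakeholders_consulted=["legal", "community advocates"],
    mitigations=["publish decision criteria", "human review of denials"],
)
print(assessment.is_documented())  # True
```

A record like this could then feed the integration step, for example by blocking deployment sign-off until `is_documented()` holds.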
This standard is applicable across sectors, from education and healthcare to government and finance. It is designed for any organization that deploys or relies on AI technology.
How ISO/IEC 42005 Relates to and Complements ISO/IEC 42001:
ISO/IEC 42001, published in December 2023, focuses on the internal infrastructure needed to govern AI systems effectively. It sets out how to build a formal, certifiable AI Management System that aligns with organizational goals, ethical principles, and applicable regulations.
It includes:
Clear roles and responsibilities for AI governance
Structured risk management processes
Policies for data handling, monitoring, and review
Alignment with legal, ethical, and sector-specific norms
Continuous oversight and audit mechanisms
While ISO/IEC 42001 ensures you govern the system, ISO/IEC 42005 ensures you understand its impact on people.
Why These Standards Work Best Together:
By combining both standards, organizations can move from reactive compliance to proactive, strategic governance. They also prepare themselves for stricter regulatory regimes such as the EU AI Act, as well as OECD frameworks and other national initiatives that demand ethical, explainable, and equitable AI.
How ART25 Consulting Can Support You:
From Stockholm to the world, ART25 Consulting specializes in AI governance, data protection, and digital risk assurance. We support clients in navigating regulatory change, designing accountable systems, and building internal capacity.
Whether you're in early development or preparing for procurement, we help ensure your AI initiatives are safe, auditable, and built for trust. Schedule a first call here to find out how our services can help your particular use cases.