The EU AI regulation takes effect. Do you have a management system for AI?
Using AI in your products or services? Then you need documented risk management, human oversight and transparency. AmpliFlow gives you the structure, with ISO 42001 as the framework.
Four risk levels, different requirements
The EU AI Act classifies AI systems by potential harm: the higher the risk, the stricter the requirements. Companies using AI in the areas listed in Annex III, such as recruitment, education or critical infrastructure, will typically fall under "high risk".
Unacceptable risk: social scoring, real-time biometric identification in public spaces, manipulative AI. These systems are banned outright.
High risk: biometrics, critical infrastructure, education, employment, law enforcement. These systems require a management system, risk assessment and human oversight.
Limited risk: chatbots and other AI interacting with people. Users must know they are talking to AI.
Minimal risk: spam filters, AI in games and similar. No specific requirements; voluntary codes of conduct.
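The four levels above can be sketched as a small lookup. This is purely illustrative: the example systems and the obligation summaries are our shorthand, not the regulation's wording.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "management system, risk assessment, human oversight"
    LIMITED = "transparency: users must know they interact with AI"
    MINIMAL = "no specific requirements, voluntary codes of conduct"

# Hypothetical examples of systems per level (not an exhaustive list):
EXAMPLES = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "CV screening for recruitment": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

print(EXAMPLES["spam filter"].value)
```

The point of such a register is that every AI system in the organisation gets an explicit classification, which then drives which controls apply.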
The AI management standard that gives you a head start
ISO/IEC 42001 is the international standard for AI Management Systems (AIMS). It gives you the framework to govern AI development and use responsibly, and demonstrates that you take the EU AI Act seriously.
AmpliFlow supports ISO 42001 with the same proven management system structure as for ISO 27001 and ISO 9001. Joakim at AmpliFlow participated in the ISO working group that developed the standard.
Risk management system
Identify and assess risks throughout the AI system lifecycle. Document measures and residual risks.
Data governance
Ensure training data is relevant, representative and free from bias. Document data provenance.
Transparency and information
Provide users with clear information about the AI system's capabilities, limitations and intended use.
Human oversight
Design systems so humans can monitor and intervene when needed. Document oversight processes.
Phased implementation
The regulation rolls out in stages. The high-risk requirements in Annex III, which affect the most companies, take effect on 2 August 2026.
Ban on AI systems with unacceptable risk (from 2 February 2025)
Rules for General Purpose AI (GPAI) models (from 2 August 2025)
Requirements for high-risk AI in Annex III take effect (2 August 2026)
Requirements for high-risk AI in products covered by the Annex I product safety legislation (from 2 August 2027)
How to build your AI management system
Existing management system tools: the same structure you already know from ISO 27001 and ISO 9001.
Risk Assessment
Classify AI systems according to the regulation's risk levels. Document risk analyses with proven methodology and link them to actions.
Pages (wiki)
Collect AI policies, technical documentation and user instructions in AmpliFlow's wiki feature. Accessible to everyone who needs the information.
Process Management
Map the AI development lifecycle and AI-affected processes. Link AI tools to your process steps for full visibility.
Audit Management
Plan and conduct conformity assessments. Document audit findings and follow up on non-conformities systematically.
Goals & Monitoring
Set goals for AI performance and track results. Monitor that AI systems work as intended over time.
Legal Requirements Register
Add the EU AI Act to your manual legal requirements register. Set target dates and link requirements to processes and controls.
What you can expect
Concrete benefits from structured AI governance.
AI governance
Complete overview of your AI systems, risks and actions in a single management system.
risk assessments
Systematic risk classification according to EU AI Act risk levels with traceable documentation.
AI lifecycle
Documented development, deployment and monitoring throughout the entire lifecycle.
compliance
Scheduled actions to meet requirements before each milestone takes effect.
Questions about the EU AI Act
What is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legislation for artificial intelligence. It entered into force on 1 August 2024 and applies to all organisations that develop, provide, or use AI systems within the EU. The regulation uses a risk-based approach where requirements depend on the AI system's risk level.
What counts as a high-risk AI system?
High-risk AI systems are defined in Annex III of the regulation and include biometric identification, AI in critical infrastructure, AI affecting education and employment, AI in law enforcement and migration, and AI used by judicial authorities. These systems must meet strict requirements for risk management, data governance, documentation, transparency, human oversight and cybersecurity.
What are the penalties for non-compliance?
The penalties are significant: up to EUR 35 million or 7% of global annual turnover for prohibited AI practices, up to EUR 15 million or 3% for most other violations, and up to EUR 7.5 million or 1% for supplying incorrect, incomplete or misleading information to authorities. In each case, whichever amount is higher applies; for SMEs and start-ups, it is instead the lower amount.
How does ISO 42001 help with EU AI Act compliance?
ISO/IEC 42001 provides a framework for building an AI Management System (AIMS) that supports EU AI Act compliance. The standard covers risk assessment, governance, documentation and continual improvement: exactly what's needed to demonstrate responsible AI management.
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act applies to all organisations that provide or use AI systems whose output is used within the EU, regardless of where the organisation is based. If your AI system affects people within the EU, you are covered by the regulation.
How does AmpliFlow support EU AI Act work?
The management system's existing tools work for AI governance: risk analysis for classifying AI systems, pages (wiki) for policies and technical documentation, process management for mapping AI-affected processes, legal requirements register for tracking the EU AI Act, and action management for following up on adaptations.
Ready to prepare for the EU AI Act?
Book a demo and we'll show you how AmpliFlow can help you structure AI governance and build your management system with ISO 42001 as the foundation.