The good news: if you already manage projects with methodological rigour, you are closer to compliance than you think. The bad news: ignoring it has real consequences, with penalties for the most serious violations reaching €35 million or 7% of global annual turnover, whichever is higher.
The Four Risk Levels You Need to Know
Unacceptable risk (banned): Social scoring, subliminal manipulation, real-time remote biometric identification in publicly accessible spaces. If your project includes any of these, stop. No grey areas.
High risk (strict requirements): AI in critical infrastructure, education, employment, essential services, justice. Requires conformity assessment, technical documentation, human oversight and EU database registration.
Limited risk (transparency obligations): Chatbots, content-generating systems. Must inform users they are interacting with AI.
Minimal risk: Most enterprise AI tools — productivity tools, data analysis, process automation. No additional specific obligations.
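The four tiers above can be sketched as a simple lookup that maps each risk level to the obligations the article describes. This is an illustrative summary for project triage, not an official taxonomy or legal advice; the tier names and obligation strings are my own shorthand.

```python
# Illustrative mapping of the EU AI Act's four risk tiers to the
# obligations summarized above. Not legal advice.
RISK_TIERS = {
    "unacceptable": {
        "allowed": False,
        "examples": ["social scoring", "subliminal manipulation",
                     "real-time remote biometric identification"],
        "obligations": ["do not deploy"],
    },
    "high": {
        "allowed": True,
        "examples": ["critical infrastructure", "education", "employment",
                     "essential services", "justice"],
        "obligations": ["conformity assessment", "technical documentation",
                        "human oversight", "EU database registration"],
    },
    "limited": {
        "allowed": True,
        "examples": ["chatbots", "content-generating systems"],
        "obligations": ["inform users they are interacting with AI"],
    },
    "minimal": {
        "allowed": True,
        "examples": ["productivity tools", "data analysis",
                     "process automation"],
        "obligations": [],
    },
}

def obligations_for(tier: str) -> list[str]:
    """Return the obligations for a known risk tier (KeyError otherwise)."""
    return RISK_TIERS[tier]["obligations"]
```

A lookup like this makes the first triage question ("which tier is this system in?") an explicit, reviewable artifact in the project rather than tribal knowledge.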
What the EU AI Act Means for Your Specific Project
For most PMOs, AI project management systems, predictive dashboards and automation agents fall under minimal or limited risk. No certification is required, but basic documentation and transparency with users are expected.
As a PM, your practical responsibility comes down to four questions you must be able to answer for any AI system in your project:
1. What type of AI system is it? Classify the risk. If you cannot classify it, that is the first problem to solve.
2. Is there a human who can supervise and override the system's decisions? Human oversight is not optional for high-risk systems.
3. Does technical documentation exist for the system? Training data, capabilities, known limitations. If the provider cannot supply it, that is a warning signal.
4. Do users know they are interacting with AI? Transparency is not just a legal requirement — it is the foundation of trust.
EU AI Act checklist for your next project
- I have classified the risk level of each AI system in the project
- Human oversight exists for high-impact decisions
- I have technical documentation from the AI system provider
- End users know they are interacting with AI
- There is a process for reporting AI-related incidents
- The system has the ability to explain its decisions (explainability)
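The checklist above can be carried into a project as a small data structure that reports which items are still open. A minimal sketch, assuming nothing beyond the standard library; the field names are hypothetical and simply mirror the six checklist items:

```python
from dataclasses import dataclass, fields

@dataclass
class AIActChecklist:
    """One boolean per checklist item from the article (illustrative names)."""
    risk_level_classified: bool = False
    human_oversight_in_place: bool = False
    technical_documentation_on_file: bool = False
    users_informed_of_ai: bool = False
    incident_reporting_process: bool = False
    decisions_explainable: bool = False

    def open_items(self) -> list[str]:
        """Names of checklist items that are still unchecked."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def ready(self) -> bool:
        """True only when every item has been checked off."""
        return not self.open_items()
```

For example, `AIActChecklist(risk_level_classified=True).open_items()` lists the five remaining gaps, which makes the compliance status easy to surface in a project status report.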
Unsure about your project's compliance status?
In a diagnostic session we review your PMO's EU AI Act compliance and identify priority actions to take.
Request free session