The EU AI Act is not coming. It is here. It entered into force on 1 August 2024, with a phased implementation timeline that has already passed its first major milestones. Organisations deploying AI systems that interact with EU residents — or that are established in the EU — are already subject to its requirements.
Most of them have no idea what those requirements look like in practice.
What the Act actually requires
The EU AI Act operates on a risk-based classification system. Most public conversation focuses on the high-risk category, and for good reason — the obligations for high-risk AI systems are substantial.
High-risk systems include AI used in employment decisions, credit scoring, biometric identification, critical infrastructure management, law enforcement, and access to essential services. If your organisation uses AI in any of these areas, or deploys AI that a third party uses in these areas, you are in scope.
The obligations for high-risk AI systems include:
Technical documentation. Providers must maintain comprehensive documentation of the system — its purpose, the data it was trained on, its performance metrics, its known limitations, and the testing it has undergone. This documentation must be available to regulatory authorities on request.
Logging and audit trail. High-risk AI systems must automatically log events sufficient to enable post-hoc reconstruction of the system's operation. This is not optional and it is not satisfied by generic application logs. The logs must capture inputs, outputs, and relevant operational parameters.
Human oversight. High-risk systems must be designed to enable meaningful human oversight — including the ability to override, interrupt, or shut down the system. Oversight must be documented.
Accuracy, robustness, and cybersecurity. Systems must achieve appropriate levels of accuracy for their intended purpose, be robust against foreseeable errors, and be protected against adversarial manipulation.
Conformity assessment. Before deployment, certain high-risk systems require conformity assessment — either self-assessment against harmonised standards or third-party assessment.
The gap most organisations are sitting in
The obligation that most organisations are furthest from meeting is the logging and audit trail requirement. And the reason is structural: most AI deployments were not designed with this in mind.
When an organisation deploys a large language model — whether via API, through a third-party product, or by running a self-hosted model — the default state is that the model receives inputs and produces outputs. What the model considered, how it weighted different factors, why it produced a particular output rather than another — none of this is logged unless you deliberately architect for it.
This is not a criticism of AI providers. It is a description of how these systems work. The EU AI Act does not care about the default state. It cares about what you can demonstrate.
If your AI system makes a decision that adversely affects someone — a credit application rejected, a job candidate filtered out, a benefits claim denied — and a regulatory authority asks you to reconstruct how that decision was reached, your answer cannot be "the model decided." That is not an audit trail. It is an absence of one.
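To make that concrete, here is a minimal sketch of what deliberately architecting for an audit trail can look like: a wrapper that records the input, output, model identity, and call parameters of every invocation as a structured, append-only log entry. The `call_model` argument and the `AUDIT_LOG_PATH` destination are placeholders for whatever client and storage your deployment actually uses; this is an illustration of the record you need, not a production logging service.

```python
import json
import uuid
import hashlib
from datetime import datetime, timezone

AUDIT_LOG_PATH = "audit_log.jsonl"  # placeholder; use append-only, access-controlled storage in practice


def audited_completion(call_model, prompt: str, *, model_id: str, params: dict, actor: str) -> str:
    """Invoke a model call and persist an audit record sufficient to reconstruct it later.

    `call_model` stands in for your actual client function, e.g. a thin wrapper
    around a vendor SDK or a self-hosted inference endpoint.
    """
    record = {
        "event_id": str(uuid.uuid4()),                      # stable identifier for this decision
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                                      # human or service account that triggered the call
        "model_id": model_id,                                # model name and version actually used
        "params": params,                                    # temperature, max tokens, system prompt version, etc.
        "input": prompt,
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    output = call_model(prompt, model_id=model_id, **params)
    record["output"] = output
    record["output_sha256"] = hashlib.sha256(output.encode()).hexdigest()
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output
```

The important property is that every decision carries a stable `event_id` through the downstream workflow (the credit decision, the hiring pipeline), so a specific outcome can be traced back to a specific model call.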
What the NIST AI RMF adds
While the EU AI Act is the binding legal instrument for EU-related deployments, the NIST AI Risk Management Framework provides the most operationally useful structure for actually building AI governance. The two are complementary.
The NIST AI RMF organises AI risk management across four functions: Govern, Map, Measure, and Manage. For organisations trying to meet EU AI Act obligations, the framework provides practical guidance on:
- How to categorise AI systems by risk profile (a minimal register sketch follows this list)
- What documentation to maintain and how to structure it
- How to implement and test human oversight mechanisms
- How to measure and monitor AI system performance in production
- How to establish accountability structures for AI decisions
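One way to make that guidance operational is a per-system governance record loosely organised around the four functions. The sketch below is an assumption about what such a record might contain, not a schema prescribed by NIST or the Act; the point is that every system has a named owner, a documented purpose and risk category, defined metrics, and a list of open risks.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """Illustrative governance record for one AI system, loosely organised
    around the NIST AI RMF functions. Field names are assumptions, not a
    prescribed schema."""
    system_name: str
    # Govern: accountability and ownership
    business_owner: str
    technical_owner: str
    # Map: context and risk categorisation
    intended_purpose: str
    risk_category: str                                        # e.g. "high-risk" under the EU AI Act
    affected_parties: list[str] = field(default_factory=list)
    # Measure: how performance and drift are assessed
    performance_metrics: dict[str, float] = field(default_factory=dict)
    monitoring_cadence: str = "monthly"
    # Manage: controls and open actions
    human_oversight_mechanism: str = ""
    open_risks: list[str] = field(default_factory=list)


register = [
    AISystemRecord(
        system_name="cv-screening-assistant",
        business_owner="Head of Talent",
        technical_owner="ML Platform Team",
        intended_purpose="Rank inbound job applications",
        risk_category="high-risk (employment)",
        affected_parties=["job applicants"],
    )
]
```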
The key insight from the NIST framework is that AI governance is not a one-time assessment. It is an ongoing programme. AI systems drift. Their training data becomes stale. Their performance in production diverges from their performance in testing. Model governance requires continuous monitoring — the same way a security programme requires continuous monitoring.
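Continuous monitoring does not have to start sophisticated. A minimal drift signal, assuming your system exposes a numeric score per decision (a confidence value, a ranking score), is to compare the score distribution in a recent production window against a baseline window, for example with a population stability index:

```python
import numpy as np


def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Rough drift signal: compare the distribution of a model score (or any
    numeric feature) in production against a baseline window."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero and log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


# An informal convention treats a PSI above roughly 0.2 as worth investigating.
if population_stability_index(np.random.normal(0, 1, 5000), np.random.normal(0.3, 1, 5000)) > 0.2:
    print("Score distribution has shifted; trigger a documented model review.")
```

The threshold here is a rule of thumb, not a regulatory requirement; what matters is that the comparison runs on a schedule and that breaches trigger a documented review.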
The six layers that need GRC coverage
Viewed holistically, AI security has six layers where governance and compliance obligations exist, each corresponding to a distinct set of risks:
Identity and access. Who is permitted to interact with your AI systems, models, and data? Are access controls enforced? Are agent access rights scoped appropriately?
Data protection. What data is being sent to AI models? Does it include PII, financial information, or sensitive business data? Is it masked, tokenised, or encrypted before it reaches a model?
Prompt and input security. Are inputs being filtered for injection attacks and jailbreak attempts? Is there a policy enforcement layer before inputs reach the model? (A minimal sketch of such a layer follows this list.)
Output validation. Are AI outputs reviewed before they are acted upon? Is there a fact-verification step? How are hallucinated or non-compliant outputs detected?
Governance and compliance. Are audit records maintained? Are risks categorised and tracked? Are the relevant frameworks — EU AI Act, NIST AI RMF, ISO 42001 — being addressed?
Monitoring and observability. Is AI system behaviour being tracked in production? Are usage irregularities detected? Is there an alerting mechanism for model drift?
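As an illustration of the data protection and prompt security layers, the sketch below masks obvious PII patterns and rejects obvious injection markers before text reaches a model. The patterns and markers are assumptions for demonstration; a production deployment would use a proper data classification or tokenisation service and an injection classifier rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real data protection needs a classification service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

# Crude, assumed indicators of injection attempts; real deployments combine
# pattern checks with a classifier and allow-listing.
INJECTION_MARKERS = ("ignore previous instructions", "disregard the system prompt")


def enforce_input_policy(text: str) -> str:
    """Mask obvious PII and reject obvious injection attempts before the
    text is ever sent to a model. Raises ValueError on rejection."""
    lowered = text.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("Input rejected by policy: possible prompt injection.")
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text
```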
Most organisations deploying AI have addressed none of these layers systematically. Many have addressed one or two. Almost none have documented their approach in a way that would satisfy an EU AI Act conformity assessment.
What this means for organisations deploying AI now
The phased timeline of the EU AI Act means that some obligations are already active and others come into force progressively. Prohibited practices have been banned since February 2025. High-risk system obligations are phasing in through 2026 and 2027.
The practical implication is that organisations deploying AI systems today should be making architectural decisions that account for the audit trail requirements — not retrofitting them later. Logging and observability are significantly harder to add to an existing AI deployment than to build in from the start.
Three things to do now, regardless of where you are in the timeline:
First, classify your AI deployments by risk. Which systems are high-risk under the Act? Which are general-purpose? Which fall into prohibited categories? You cannot design governance for an AI system you have not characterised.
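A first-pass triage can be as simple as mapping each deployment's use-case areas against the Act's risk tiers. The area lists below are illustrative and incomplete; the Act's annexes define the actual categories, and borderline cases need legal review rather than a keyword lookup.

```python
# Illustrative only: not an exhaustive or authoritative list of the Act's categories.
HIGH_RISK_AREAS = {
    "employment", "credit scoring", "biometric identification",
    "critical infrastructure", "law enforcement", "essential services",
}
PROHIBITED_AREAS = {"social scoring", "untargeted facial image scraping"}


def classify_deployment(use_case_areas: set[str]) -> str:
    """Rough first-pass triage of one deployment against the Act's risk tiers."""
    if use_case_areas & PROHIBITED_AREAS:
        return "prohibited"
    if use_case_areas & HIGH_RISK_AREAS:
        return "high-risk"
    return "review: likely limited or minimal risk"


print(classify_deployment({"employment"}))  # -> "high-risk"
```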
Second, assess your current logging posture. For each AI system, can you reconstruct how a specific output was produced? Can you demonstrate that your human oversight mechanisms are operational? If not, what would be required to get there?
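If something like the logging sketch above is in place, the reconstruction test is straightforward: given the event identifier attached to a decision, retrieve the full record. The function below assumes the append-only JSON-lines log from the earlier example.

```python
import json


def reconstruct_decision(event_id: str, audit_log_path: str = "audit_log.jsonl") -> dict:
    """Retrieve the full audit record for one decision, assuming the
    append-only JSON-lines log written by the earlier logging sketch."""
    with open(audit_log_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["event_id"] == event_id:
                return record
    raise LookupError(f"No audit record found for event {event_id}")
```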
Third, document what you have. Even if your current posture is incomplete, documentation of your current state, known gaps, and remediation plan is substantially better than no documentation. Regulators looking at early enforcement will be assessing good-faith effort alongside technical compliance.
AI GRC is a new discipline, but it is not a complicated one. It requires the same rigour as any other compliance programme: evidence gathering, systematic assessment, documented controls, and ongoing monitoring. The frameworks are there. The obligations are active. The gap is in implementation.
If you are assessing your organisation's AI governance posture, or building a compliance programme for an AI deployment, get in touch. This is exactly what we do.
Sam Sultan
Founder, Enact Cyber
Building evidence-first compliance methodology at Enact Cyber. Every report grounded in real findings — never pre-written, never fabricated.