The EU AI Act Deadline Is 90 Days Away. Is Your Business Ready?
On August 2, 2026, the world's most consequential AI regulation becomes fully enforceable for high-risk systems. With trilogue negotiations still unresolved and no confirmed delay, the clock is ticking, and the stakes couldn't be higher.
In three months, the European Union's Artificial Intelligence Act will impose legally enforceable obligations on every business that deploys high-risk AI systems affecting EU residents, regardless of where that business is incorporated. Penalties reach EUR 35 million or 7% of global annual turnover for the most serious violations, and up to EUR 15 million or 3% for breaches of the high-risk obligations themselves. And unlike the GDPR's slow regulatory ramp-up, the AI Act's enforcement machinery is already operational.
Yet many businesses are still waiting. Some are watching the EU's "Digital Omnibus" legislative proposal, which would push the deadline to December 2027. That is a gamble. The second political trilogue on the Omnibus concluded on April 28, 2026, without agreement. A third trilogue is scheduled for May 13. If no formal adoption occurs before August 2, the original Act applies in full on that date. And because the Act is not retroactive, systems already on the EU market may benefit from transitional relief, but anything placed on the market on or after August 2 must comply from day one.
"The decision is actually quite critical, because the EU AI Act is not retroactive — AI systems already in the market before the law goes into effect may be grandfathered in and exempt from certain obligations."
What the Act Actually Covers — and Who It Catches
The AI Act uses a risk-based classification. The vast majority of AI tools — spam filters, recommendation engines, basic chatbots — face minimal or no obligations. But the "high-risk" category is broader than most businesses realize. If your AI system is used in any of the following contexts, it almost certainly qualifies:
HIGH-RISK AI USE CASES UNDER ANNEX III
Recruitment, CV screening, and hiring decisions
Performance evaluation and employee monitoring
Credit scoring and financial services risk assessment
Insurance underwriting and claims processing
Access to educational opportunities and scoring
Customer service AI with consequential decision authority
Biometric identification and emotion recognition
AI influencing access to essential services
Critically, the Act's extraterritorial reach mirrors the GDPR's. A Canadian-headquartered SaaS company whose AI-powered hiring tool is licensed to European clients is a provider subject to the full spectrum of compliance requirements, including conformity assessments, technical documentation, CE marking, and EU database registration, even if its servers never touch European soil.
The Four Compliance Obligations That Will Catch Companies Off Guard
1. RISK MANAGEMENT SYSTEMS
High-risk providers must implement a continuous, documented risk management system, not a one-time assessment. This means identifying foreseeable risks, testing against them, adopting mitigation measures, and maintaining audit trails. Many organizations are discovering that their existing GRC frameworks need significant retooling to accommodate AI-specific risk categories, including bias, opacity, and cascading error propagation.
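For technical teams wondering what "continuous and documented" means in practice, a minimal sketch helps. Everything below is illustrative: the class, field names, and example risk are hypothetical, not prescribed by the Act. The point is structural — each identified risk carries its mitigation measure and a timestamped audit trail, rather than living in a one-off spreadsheet.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a risk-register entry. Names and fields are
# illustrative only; the Act prescribes the outcome (identified risks,
# adopted mitigations, auditable records), not this data model.

@dataclass
class RiskEntry:
    risk_id: str
    description: str   # e.g. "demographic bias in CV ranking"
    mitigation: str    # the measure adopted against the risk
    history: list = field(default_factory=list)

    def log(self, event: str) -> None:
        """Append a timestamped entry to the audit trail."""
        self.history.append(
            (datetime.now(timezone.utc).isoformat(), event)
        )

register = [
    RiskEntry("R-001",
              "demographic bias in CV ranking",
              "quarterly disparate-impact testing"),
]
register[0].log("Q2 bias test completed; no disparity above threshold")
```

The design choice that matters is the append-only log: when a regulator asks what was tested and when, the trail itself is the evidence.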
2. DATA GOVERNANCE REQUIREMENTS
Training, validation, and testing datasets must meet stringent quality standards. Relevant statistical properties must be documented. Bias must be identified and addressed. For businesses that deployed AI systems over the last two years without these controls, retroactive documentation is a painful — and often expensive — exercise. Legal counsel familiar with both data protection and AI governance frameworks is essential here, as obligations interact directly with GDPR data minimization and purpose limitation requirements.
3. TECHNICAL DOCUMENTATION AND TRANSPARENCY
Before placing a high-risk AI system on the EU market, providers must produce documentation detailed enough for competent authorities to assess compliance — including the system's intended purpose, design choices, accuracy metrics, and human oversight mechanisms. For organizations relying on third-party AI vendors, this creates urgent contractual issues: vendors must be compelled to provide this documentation, and many current AI procurement contracts contain no such obligations.
4. HUMAN OVERSIGHT MECHANISMS
High-risk AI systems must allow human operators to understand, monitor, and override outputs. This is not simply a UI requirement — it demands organizational process redesign. An AI-assisted employment decision platform, for example, must ensure that a human being with appropriate authority actually reviews and is capable of overriding the system's recommendations, with records kept to prove it.
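To make the oversight requirement concrete for engineering teams, here is a minimal, purely hypothetical sketch of a human-in-the-loop gate: the AI's recommendation is never the final outcome until a named reviewer confirms or overrides it, and the review itself is recorded as evidence. All names and fields are illustrative assumptions, not language from the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative only: the AI recommendation and the final outcome are
# separate fields, so an unreviewed decision is visibly incomplete.

@dataclass
class Decision:
    candidate_id: str
    ai_recommendation: str             # e.g. "reject"
    final_outcome: Optional[str] = None
    reviewer: Optional[str] = None
    reviewed_at: Optional[str] = None

    def review(self, reviewer: str, outcome: str) -> None:
        """A human with appropriate authority records the binding outcome."""
        self.reviewer = reviewer
        self.final_outcome = outcome
        self.reviewed_at = datetime.now(timezone.utc).isoformat()

d = Decision("C-42", ai_recommendation="reject")
d.review(reviewer="hr.manager", outcome="advance")  # human overrides the AI
```

Keeping the recommendation and the human outcome as distinct, both-retained fields is what produces the "records kept to prove it" described above.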
The IP and Commercial Contract Dimension
The AI Act creates a cascade of intellectual property and commercial contracting obligations that have received less attention than the technical compliance requirements — but are equally urgent.
Businesses that purchase, integrate, or resell AI tools have almost certainly inherited compliance exposure under contracts drafted before the Act's obligations crystallized. Vendor agreements that grant the AI provider broad rights over customer data and outputs may now conflict with the Act's requirements for transparency and human control. Indemnification provisions that seemed adequate under pre-AI commercial frameworks may leave deployers fully exposed to regulatory penalties.
And the IP questions are multiplying. As AI-generated outputs become embedded in commercial products, the question of who owns what — and who bears liability when an AI system produces an output that infringes a third party's rights — is being litigated in courts across the US and EU simultaneously. The Act adds a regulatory liability layer on top of civil IP exposure.
WHAT COUNSEL SHOULD BE REVIEWING NOW
All AI vendor and SaaS procurement contracts for compliance obligations and indemnification gaps
Data processing agreements that touch AI training or inference
Employment contracts and HR policies affected by AI-assisted decisions
IP ownership clauses in any contract involving AI-generated deliverables
Insurance coverage for AI liability and regulatory penalties
Board and management disclosure obligations regarding AI risk
Your AI stack is moving faster than your legal framework.
As a fractional General Counsel specializing in AI, IP, privacy, and commercial law, I work with businesses at exactly this inflection point, providing senior legal judgment without the overhead of a full-time hire. If you're unsure whether your AI systems are in scope, or if your contracts need an urgent review before August, let's talk.