Does the EU AI Act Apply to My Canadian Startup? (A 3-Step Test)
The "Brussels Effect" is No Longer Hypothetical
For Canadian AI founders, the regulatory landscape shifted dramatically this month. While you may be focused on scaling in North America, the EU AI Act has entered a critical phase of enforcement that likely captures your business, regardless of where your servers are located.
As of November 2025, we are in a complex transitional period. The bans on "unacceptable risk" AI have been live since February; the strict rules for General Purpose AI (GPAI) took effect in August; and just this week (Nov 19, 2025), the European Commission dropped a "Digital Omnibus" proposal that may delay the looming deadlines for High-Risk systems.
It is a chaotic moment for compliance. To cut through the noise, we have developed a 3-Step Test to help Canadian startups determine if they are in the "danger zone."
Step 1: The "Output" Test (Jurisdiction)
The most dangerous misconception we hear from Canadian clients is: "We don't have an office in Europe, so we're safe."
Under Article 2, the EU AI Act applies extraterritorially. You are subject to the Act if you meet any of these criteria:
Placing on the Market: You make your AI system available on the EU market, whether for payment or free of charge.
Putting into Service: You supply your system for first use in the EU, or use your own AI tool for business operations there.
The "Output" Trap (Crucial for SaaS): This is the catch-all. Even if you are entirely based in Canada, the Act applies if the output produced by your system is used in the EU.
Real-World Scenario:
A Toronto-based health-tech startup uses AI to analyze X-rays. They sell their software exclusively to a US hospital chain. However, that US chain uses the software to diagnose a patient at their satellite clinic in Paris.
Result: Because the output (the diagnosis) is used in the EU, the Canadian startup is subject to the EU AI Act.
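To make Step 1 concrete, here is a minimal sketch of the Article 2 triage expressed as code. The field names are our own illustrative labels, not terms from the Act, and a "yes" on any one question is enough to pull you into scope:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    sells_into_eu: bool       # "placing on the market" (Art. 2)
    operates_in_eu: bool      # "putting into service" within the EU
    output_used_in_eu: bool   # the catch-all: outputs consumed in the EU

def in_scope_of_eu_ai_act(system: AISystem) -> bool:
    """Step 1 triage: any single trigger creates extraterritorial scope."""
    return system.sells_into_eu or system.operates_in_eu or system.output_used_in_eu

# The Toronto health-tech scenario: no EU sales, no EU operations,
# but a Paris clinic consumes the diagnosis (the output).
toronto_startup = AISystem(sells_into_eu=False, operates_in_eu=False,
                           output_used_in_eu=True)
print(in_scope_of_eu_ai_act(toronto_startup))  # True
```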
Step 2: The "Risk" Test (Status Check)
If Step 1 applies to you, you must determine what you are building. The compliance timeline is staggered based on risk. Here is the status as of November 2025:
A. Prohibited AI (Status: BANNED since Feb 2, 2025)
If your product does any of the following, you are already non-compliant if you touch the EU market:
Untargeted Scraping: Building facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage (Clearview AI style).
Emotion Recognition: Using AI to infer emotions in workplaces or educational institutions (subject to narrow medical and safety exceptions).
Social Scoring or Behavioral Manipulation.
B. General Purpose AI / GPAI (Status: LIVE since Aug 2, 2025)
If you build "foundation models" (like LLMs) or powerful generative systems, your compliance deadline has passed.
Obligations: You must already maintain technical documentation, have a policy in place to comply with EU copyright law, and publish a sufficiently detailed summary of the content used to train your model.
Systemic Risk: If your model was trained using more than $10^{25}$ FLOPs of cumulative compute, it is presumed to carry systemic risk, triggering additional safety testing and incident reporting requirements.
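That $10^{25}$ figure is easier to reason about with the community's standard back-of-envelope estimate for dense transformer training: roughly 6 FLOPs per parameter per training token. The heuristic and the model sizes below are illustrative assumptions, not anything the Act specifies:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # EU AI Act presumption threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough heuristic for dense transformers: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical examples: a 7B model on 2T tokens vs. a 400B model on 15T tokens.
for name, params, tokens in [("7B params / 2T tokens", 7e9, 2e12),
                             ("400B params / 15T tokens", 4e11, 1.5e13)]:
    flops = estimated_training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic risk presumed: "
          f"{flops > SYSTEMIC_RISK_THRESHOLD}")
# 7B params / 2T tokens: ~8.4e+22 FLOPs -> systemic risk presumed: False
# 400B params / 15T tokens: ~3.6e+25 FLOPs -> systemic risk presumed: True
```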
C. High-Risk AI Systems (Status: UPCOMING / IN FLUX)
This category includes AI used in HR, Education, Critical Infrastructure, Credit Scoring, or Law Enforcement.
Original Deadline: August 2, 2026.
The Nov 19, 2025 Update: The European Commission has now proposed a "Digital Omnibus" package that would delay this deadline (potentially by 16 months, to late 2027) because the supporting technical standards are not yet finalized.
Our Advice: Do not bank on this delay yet. It is currently just a proposal. Until the European Parliament and Council officially adopt the amendment, the legal deadline remains August 2026. Proceed with your gap analysis now.
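For gap-analysis planning, the staggered statuses above collapse into a simple lookup. The use cases are the ones named in this article, not a complete Annex III mapping, and the High-Risk entry assumes the proposed delay has not been adopted:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Banned since Feb 2, 2025"
    GPAI = "Obligations live since Aug 2, 2025"
    HIGH_RISK = "Deadline Aug 2, 2026 (delay proposed, not yet adopted)"

# Illustrative triage of the use cases discussed in this section.
USE_CASE_TIERS = {
    "untargeted facial scraping": RiskTier.PROHIBITED,
    "workplace emotion recognition": RiskTier.PROHIBITED,
    "social scoring": RiskTier.PROHIBITED,
    "foundation model / LLM": RiskTier.GPAI,
    "resume screening (HR)": RiskTier.HIGH_RISK,
    "credit scoring": RiskTier.HIGH_RISK,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```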
Step 3: The "Role" Test (Responsibility)
Finally, are you a Provider or a Deployer?
Provider: You developed the AI (or had it developed) and market it under your brand.
Burden: High. You are responsible for CE marking, a quality management system, technical documentation, and logging to support human oversight.
Deployer: You are a company using AI (e.g., a bank using an off-the-shelf resume scanner).
Burden: Moderate. You must use the system according to the provider's instructions, assign competent human oversight, and monitor its operation.
Note for Startups: If you are a SaaS company licensing your tool to others, you are the Provider. You bear the brunt of the regulatory cost.
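The role test reduces to two questions, sketched below with our own simplified inputs. Real edge cases (white-labelling someone else's model, or substantially modifying a system you deployed) can flip a Deployer into a Provider, so treat this as a starting point only:

```python
def classify_role(developed_or_commissioned: bool,
                  markets_under_own_brand: bool) -> str:
    """Step 3 triage: Provider vs. Deployer under the EU AI Act."""
    if developed_or_commissioned and markets_under_own_brand:
        return "Provider (high burden: CE marking, QMS, documentation, logging)"
    return "Deployer (moderate burden: follow instructions, human oversight)"

# A SaaS startup licensing its own tool is a Provider;
# a bank buying an off-the-shelf resume scanner is a Deployer.
print(classify_role(True, True))
print(classify_role(False, False))
```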
Conclusion: The "Wait and See" Era is Over
While the proposed delay for High-Risk systems offers a glimmer of hope for a simpler timeline, the core frameworks for Prohibited AI and General Purpose AI are fully operational.
If you are a Canadian startup with EU users (or EU-destined outputs), ignoring this regulation exposes you to fines of up to €35 million or 7% of global turnover, whichever is higher.
Your Next Move:
Start with the "Output Test." Look at your user logs. If you see IP addresses in Frankfurt, Paris, or Dublin, you need to classify your risk immediately.
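One way to run that check is to screen your access logs against MaxMind's free GeoLite2 country database using the `geoip2` package (`pip install geoip2`). The log layout (client IP as the first whitespace-separated field) and the database path are assumptions to adapt to your own stack, and IP geolocation is only approximate:

```python
import geoip2.database
import geoip2.errors

# EU member-state ISO codes (extend with EEA states if you track those markets).
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE",
}

def count_eu_requests(log_path: str, db_path: str = "GeoLite2-Country.mmdb") -> int:
    """Count log lines whose client IP (assumed: first field) resolves to the EU."""
    hits = 0
    with geoip2.database.Reader(db_path) as reader, open(log_path) as logfile:
        for line in logfile:
            fields = line.split()
            if not fields:
                continue
            try:
                record = reader.country(fields[0])
            except (ValueError, geoip2.errors.AddressNotFoundError):
                continue  # malformed or unmapped IP
            if record.country.iso_code in EU_COUNTRIES:
                hits += 1
    return hits

if __name__ == "__main__":
    eu_hits = count_eu_requests("access.log")
    print(f"EU-originating requests: {eu_hits}")
    if eu_hits:
        print("The Output Test likely applies; classify your risk tier now.")
```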