EU AI Act Compliance: What U.S. Organizations Should Know

The EU AI Act has extraterritorial reach — any organization whose AI systems affect people in the EU is in scope, including U.S. companies. It classifies AI systems by risk level with enforcement phased through 2027.

Zack Jones · Updated · EU AI Act · AI governance · compliance

High-risk AI system obligations under the EU AI Act take effect in August 2026. If your organization deploys AI systems that affect EU residents — hiring tools, credit scoring, customer service automation, medical devices — you are in scope regardless of where your servers are hosted.

The Act operates with the same extraterritorial reach as GDPR. U.S. organizations learned that lesson the hard way in 2018. The organizations preparing now are building compliance infrastructure while there is still time to do it methodically. The organizations waiting will build it under enforcement pressure with auditors watching.

In short: if your organization deploys, develops, or procures AI systems that touch EU markets or EU individuals, you need to understand and prepare for the EU AI Act.

Why Should U.S. Organizations Care?

The EU AI Act has extraterritorial reach, similar to GDPR. You are in scope if:

  • Your AI system’s output is used in the EU — even if the system is hosted in the United States
  • You place AI systems on the EU market — including SaaS products with AI features
  • You are a deployer of AI systems that affect individuals in the EU — even if you did not build the AI

For organizations with any EU exposure — clients, employees, customers, or partners — the EU AI Act is not optional.
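If it helps to see those tests side by side, here is a minimal sketch of a scope check in Python. The field names and structure are our own illustration, not terminology from the Act; the point is simply that any one of the three criteria is enough to put a system in scope.

```python
from dataclasses import dataclass

@dataclass
class AISystemScope:
    """Illustrative scope facts about one AI system (hypothetical fields)."""
    output_used_in_eu: bool        # output reaches individuals or businesses in the EU
    placed_on_eu_market: bool      # sold or offered in the EU, incl. SaaS with AI features
    affects_eu_individuals: bool   # deployed in a way that affects people in the EU

def in_scope(system: AISystemScope) -> bool:
    """Any one of the three tests is enough to bring a system into scope."""
    return (
        system.output_used_in_eu
        or system.placed_on_eu_market
        or system.affects_eu_individuals
    )

# Example: a U.S.-hosted resume screener whose scores are used by an EU subsidiary
resume_screener = AISystemScope(
    output_used_in_eu=True,
    placed_on_eu_market=False,
    affects_eu_individuals=True,
)
assert in_scope(resume_screener)
```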

How Does the EU AI Act Classify AI Risk?

The Act uses a risk-based tiered approach:

Unacceptable Risk (Banned)

AI systems that pose a clear threat to fundamental rights:

  • Social scoring by governments
  • Real-time biometric surveillance in public spaces (with narrow exceptions)
  • Manipulation techniques that exploit vulnerabilities
  • Emotion recognition in workplaces and educational institutions

High Risk

AI systems used in areas with significant impact on individuals:

  • Employment and worker management (hiring, performance evaluation, termination)
  • Credit scoring and financial services
  • Education (admissions, grading, proctoring)
  • Critical infrastructure management
  • Law enforcement and immigration
  • Healthcare and medical devices

High-risk systems face the most extensive compliance obligations.

Limited Risk

AI systems that interact with people but pose moderate risk:

  • Chatbots and virtual assistants (must disclose they are AI)
  • Deepfake generators (must label output as AI-generated)
  • Emotion recognition systems (must inform subjects)

Minimal Risk

AI systems with negligible risk (e.g., spam filters, game AI). No specific obligations beyond voluntary codes of practice.
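To make the tiers easier to reason about internally, some teams encode them as a simple lookup from use case to tier. The sketch below is an illustrative simplification, not a classifier you should rely on; real classification depends on the Act's annexes and the system's exact context.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # full compliance obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

# Illustrative mapping of common use cases to tiers (simplified).
USE_CASE_TIERS = {
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "exam_proctoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Default to HIGH when unsure, so unknown systems get reviewed, not ignored."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```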

What Are the Key Compliance Obligations?

For high-risk AI systems, the EU AI Act requires:

| Requirement | Description |
| --- | --- |
| Risk management system | Continuous identification and mitigation of AI risks |
| Data governance | Training data quality, relevance, and representativeness |
| Technical documentation | Detailed documentation of the AI system’s design, development, and capabilities |
| Record-keeping | Automatic logging of AI system operations |
| Transparency | Clear information to deployers and users about system capabilities and limitations |
| Human oversight | Mechanisms for human intervention and override |
| Accuracy and robustness | Performance standards including cybersecurity measures |
| Conformity assessment | Pre-market assessment for certain high-risk categories |

For general-purpose AI (GPAI) models — including large language models — there are additional transparency and documentation obligations, with stricter rules for models deemed to pose “systemic risk.”
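Of the obligations above, record-keeping is one you can start prototyping now. The sketch below appends one JSON record per AI decision; the schema is an assumption about what an auditor might want to see, not a format prescribed by the Act.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system_id: str, model_version: str, input_summary: str,
                    output_summary: str, human_override: bool,
                    logfile: str = "ai_decision_log.jsonl") -> None:
    """Append one decision record as a JSON line (illustrative schema only)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which AI system produced the output
        "model_version": model_version,    # ties the decision to a specific model build
        "input_summary": input_summary,    # avoid logging raw personal data here
        "output_summary": output_summary,
        "human_override": human_override,  # evidence of the human-oversight mechanism
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a hiring tool's recommendation, later overridden by a recruiter
log_ai_decision(
    system_id="resume-screener-v2",
    model_version="2025.10.1",
    input_summary="candidate 8841, role: data analyst",
    output_summary="recommend reject (score 0.31)",
    human_override=True,
)
```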

What Is the Enforcement Timeline?

| Date | Milestone |
| --- | --- |
| August 2024 | EU AI Act enters into force |
| February 2025 | Prohibited AI practices take effect |
| August 2025 | GPAI model obligations take effect |
| August 2026 | High-risk AI system obligations take effect |
| August 2027 | Full enforcement for all remaining provisions |

Organizations should not wait for the final deadline. Building compliance infrastructure takes time, and regulators expect evidence of progress.

How Does an EU AI Act Review Assessment Work?

A review assessment evaluates your organization’s AI systems and governance against EU AI Act requirements:

  1. AI System Inventory — Identify all AI systems in scope, their risk classifications, and their EU exposure
  2. Gap Analysis — Compare current practices against the Act’s requirements for each risk tier
  3. Compliance Roadmap — Prioritized recommendations for closing gaps before enforcement deadlines
  4. Documentation Review — Assess existing technical documentation, risk assessments, and governance structures

The assessment is practical and forward-looking — it tells you where you stand today and what to do next.
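In code terms, steps 1 through 3 amount to a structured inventory that can be sorted by risk and checked for gaps. Here is a minimal sketch of that idea; the field names and prioritization rule are illustrative assumptions, not a Genesis deliverable format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    eu_exposure: bool              # any of the three scope tests met
    missing_controls: list = field(default_factory=list)  # e.g. "technical documentation"

def compliance_roadmap(inventory: list[AISystemRecord]) -> list[tuple[str, list]]:
    """Return in-scope systems ordered by risk tier, highest risk first (illustrative)."""
    priority = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}
    in_scope = [s for s in inventory if s.eu_exposure]
    in_scope.sort(key=lambda s: priority.get(s.risk_tier, 1))
    return [(s.name, s.missing_controls) for s in in_scope]

inventory = [
    AISystemRecord("resume screener", "high", True,
                   ["risk management system", "technical documentation"]),
    AISystemRecord("support chatbot", "limited", True, ["AI disclosure to users"]),
    AISystemRecord("internal spam filter", "minimal", False),
]
for name, gaps in compliance_roadmap(inventory):
    print(f"{name}: close gaps -> {gaps}")
```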

EU AI Act vs. U.S. AI Governance

| Aspect | EU AI Act | U.S. Approach |
| --- | --- | --- |
| Nature | Binding regulation with penalties | Voluntary frameworks (NIST AI RMF) plus sector-specific rules |
| Scope | AI systems placed on the EU market or affecting EU individuals, tiered by risk | Varies by sector and state |
| Penalties | Up to €35M or 7% of global annual turnover | Varies; no single federal AI penalty framework |
| Timeline | Phased through 2027 | Evolving; state laws emerging |

Many organizations are using the NIST AI RMF as their foundation and layering EU AI Act-specific requirements on top. The frameworks are complementary, and a strong AI RMF implementation covers much of the EU AI Act’s governance expectations.
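One way to operationalize that layering is a rough crosswalk from the high-risk obligations to the NIST AI RMF core functions (Govern, Map, Measure, Manage). The mapping below reflects our judgment about where each obligation most naturally lands; it is not an official crosswalk from either framework.

```python
# Illustrative crosswalk: EU AI Act high-risk obligations -> NIST AI RMF functions.
# The placements are judgment calls, not an official mapping from either framework.
OBLIGATION_TO_RMF = {
    "risk management system": ["GOVERN", "MANAGE"],
    "data governance": ["MAP", "MEASURE"],
    "technical documentation": ["MAP"],
    "record-keeping": ["MEASURE"],
    "transparency": ["GOVERN", "MAP"],
    "human oversight": ["GOVERN", "MANAGE"],
    "accuracy and robustness": ["MEASURE"],
    "conformity assessment": ["MANAGE"],
}

def rmf_coverage(implemented_functions: set[str]) -> list[str]:
    """List obligations whose mapped RMF functions are all already implemented."""
    return [
        obligation
        for obligation, funcs in OBLIGATION_TO_RMF.items()
        if set(funcs) <= implemented_functions
    ]

print(rmf_coverage({"GOVERN", "MAP"}))  # -> ['technical documentation', 'transparency']
```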

For MSPs: EU AI Act Creates Urgent Client Demand

If your clients have European customers, employees, or partners, they likely have EU AI Act exposure they have not assessed. Most do not know which of their AI systems qualify as high-risk under the Act’s classification framework.

A Genesis EU AI Act readiness assessment identifies in-scope systems, classifies risk tiers, and produces a compliance roadmap — all under your brand at wholesale pricing. With the August 2026 deadline approaching, this is a time-sensitive service with natural urgency built into the sales conversation.

For vCISOs: AI Regulation Expands Your Advisory Scope

The EU AI Act creates a new compliance domain that most internal teams cannot navigate without guidance. If you are advising organizations with EU exposure, AI Act readiness should be part of your governance program.

Layer it on top of existing NIST AI RMF or ISO 42001 work — the frameworks complement each other. A Genesis assessment handles the technical evaluation (system inventory, risk classification, gap analysis). You handle the strategic response: which systems to modify, which to document, and how to present the compliance posture to leadership.


Genesis delivers EU AI Act readiness assessments that inventory your AI systems, classify risk levels, and map gaps against enforcement deadlines. High-risk system obligations take effect August 2026.

For organizations with EU exposure: Get an AI system inventory and gap analysis before the August 2026 deadline. We will tell you exactly which systems are in scope and what compliance requires.

Contact us to start your EU AI Act assessment.

Frequently Asked Questions

Does the EU AI Act apply to US companies?
Yes. The EU AI Act has extraterritorial reach similar to GDPR. You are in scope if your AI system's output is used in the EU, you place AI systems on the EU market (including SaaS with AI features), or you deploy AI systems that affect EU individuals.
What are the EU AI Act risk classifications?
The Act classifies AI systems into four tiers: Unacceptable Risk (banned), High Risk (strict compliance requirements), Limited Risk (transparency obligations), and Minimal Risk (no specific obligations).
When does the EU AI Act take full effect?
Enforcement is phased: prohibited practices took effect February 2025, GPAI obligations August 2025, high-risk system obligations August 2026, and full enforcement August 2027.