EU AI Act Compliance: What U.S. Organizations Should Know

The EU AI Act has extraterritorial reach — any organization whose AI systems affect people in the EU is in scope, including U.S. companies. It classifies AI systems by risk level with enforcement phased through 2027.

Zack Jones · EU AI Act · AI governance · compliance

The EU AI Act is the world’s first comprehensive AI regulation. Adopted in 2024 and entering enforcement in phases through 2027, it establishes binding requirements for AI systems based on their risk level. And it does not apply only to European companies: any organization whose AI systems affect people in the EU is in scope.

In short: if your organization deploys, develops, or procures AI systems that touch EU markets or EU individuals, you need to understand and prepare for the EU AI Act.

Why Should U.S. Organizations Care?

The EU AI Act has extraterritorial reach, similar to GDPR. You are in scope if:

  • Your AI system’s output is used in the EU — even if the system is hosted in the United States
  • You place AI systems on the EU market — including SaaS products with AI features
  • You are a deployer of AI systems that affect individuals in the EU — even if you did not build the AI

For organizations with any EU exposure — clients, employees, customers, or partners — the EU AI Act is not optional.

How Does the EU AI Act Classify AI Risk?

The Act uses a risk-based tiered approach:

Unacceptable Risk (Banned)

AI systems that pose a clear threat to fundamental rights:

  • Social scoring by governments
  • Real-time biometric surveillance in public spaces (with narrow exceptions)
  • Manipulation techniques that exploit vulnerabilities
  • Emotion recognition in workplaces and educational institutions

High Risk

AI systems used in areas with significant impact on individuals:

  • Employment and worker management (hiring, performance evaluation, termination)
  • Credit scoring and financial services
  • Education (admissions, grading, proctoring)
  • Critical infrastructure management
  • Law enforcement and immigration
  • Healthcare and medical devices

High-risk systems face the most extensive compliance obligations.

Limited Risk

AI systems that interact with people and trigger transparency obligations:

  • Chatbots and virtual assistants (must disclose they are AI)
  • Deepfake generators (must label output as AI-generated)
  • Emotion recognition systems (must inform subjects)

Minimal Risk

AI systems with negligible risk (e.g., spam filters, game AI). No specific obligations beyond voluntary codes of practice.
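The four tiers above can be sketched as a simple lookup. This is an illustrative sketch only: the use-case labels are hypothetical shorthand, not official Act terminology, and real classification requires legal analysis of each system.

```python
# Illustrative mapping from use-case labels (hypothetical names, not
# official Act terminology) to EU AI Act risk tiers.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "workplace_emotion_recognition": "unacceptable",
    "hiring": "high",
    "credit_scoring": "high",
    "exam_proctoring": "high",
    "chatbot": "limited",
    "deepfake_generation": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case; unknown cases need review."""
    return RISK_TIERS.get(use_case, "needs_manual_review")

print(classify("hiring"))       # high
print(classify("spam_filter"))  # minimal
```

In practice the default branch matters most: anything not yet assessed should route to manual review rather than silently landing in "minimal."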

What Are the Key Compliance Obligations?

For high-risk AI systems, the EU AI Act requires:

  • Risk management system: Continuous identification and mitigation of AI risks
  • Data governance: Training data quality, relevance, and representativeness
  • Technical documentation: Detailed documentation of the AI system’s design, development, and capabilities
  • Record-keeping: Automatic logging of AI system operations
  • Transparency: Clear information to deployers and users about system capabilities and limitations
  • Human oversight: Mechanisms for human intervention and override
  • Accuracy and robustness: Performance standards, including cybersecurity measures
  • Conformity assessment: Pre-market assessment for certain high-risk categories

For general-purpose AI (GPAI) models — including large language models — there are additional transparency and documentation obligations, with stricter rules for models deemed to pose “systemic risk.”

What Is the Enforcement Timeline?

  • August 2024: EU AI Act enters into force
  • February 2025: Prohibited AI practices take effect
  • August 2025: GPAI model obligations take effect
  • August 2026: High-risk AI system obligations take effect
  • August 2027: Full enforcement for all remaining provisions
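The phased timeline lends itself to a simple applicability check. A minimal sketch, assuming first-of-month dates as an approximation (the Act's exact effective days may differ):

```python
from datetime import date

# Approximate milestone dates (first of the month, for illustration).
MILESTONES = [
    (date(2025, 2, 1), "Prohibited AI practices"),
    (date(2025, 8, 1), "GPAI model obligations"),
    (date(2026, 8, 1), "High-risk AI system obligations"),
    (date(2027, 8, 1), "Full enforcement"),
]

def applicable_milestones(today: date) -> list[str]:
    """Return the names of milestones whose effective date has passed."""
    return [name for d, name in MILESTONES if d <= today]

print(applicable_milestones(date(2026, 1, 1)))
# ['Prohibited AI practices', 'GPAI model obligations']
```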

Organizations should not wait for the final deadline. Building compliance infrastructure takes time, and regulators expect evidence of progress.

How Does an EU AI Act Review Assessment Work?

A review assessment evaluates your organization’s AI systems and governance against EU AI Act requirements:

  1. AI System Inventory — Identify all AI systems in scope, their risk classifications, and their EU exposure
  2. Documentation Review — Assess existing technical documentation, risk assessments, and governance structures
  3. Gap Analysis — Compare current practices against the Act’s requirements for each risk tier
  4. Compliance Roadmap — Prioritized recommendations for closing gaps before enforcement deadlines

The assessment is practical and forward-looking — it tells you where you stand today and what to do next.
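The inventory and gap findings can be captured in a simple record per system. A hypothetical sketch; the field names and values are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry: a system, its risk tier, and open gap findings."""
    name: str
    risk_tier: str             # "unacceptable" | "high" | "limited" | "minimal"
    eu_exposure: bool          # output used in the EU, or EU individuals affected
    gaps: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("resume-screener", "high", True,
                   gaps=["no human-oversight mechanism", "no audit logging"]),
    AISystemRecord("internal-spam-filter", "minimal", False),
]

# Roadmap prioritization: in-scope, high-risk systems with open gaps first.
priority = [s.name for s in inventory
            if s.eu_exposure and s.risk_tier == "high" and s.gaps]
print(priority)  # ['resume-screener']
```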

EU AI Act vs. U.S. AI Governance

  • Nature: EU AI Act is a binding regulation with penalties; the U.S. relies on voluntary frameworks (NIST AI RMF) plus sector-specific rules
  • Scope: EU AI Act covers all AI systems affecting the EU; U.S. coverage varies by sector and state
  • Penalties: EU AI Act fines reach up to €35M or 7% of global annual turnover; the U.S. has no single federal AI penalty framework
  • Timeline: EU AI Act enforcement is phased through 2027; the U.S. landscape is evolving, with state laws emerging
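The headline penalty cap is "the greater of" two figures, which is worth making concrete. A minimal sketch for the most serious violations (other violation categories carry lower caps):

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious violations:
    the greater of EUR 35M or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1B turnover, the 7% prong dominates.
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```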

Many organizations are using the NIST AI RMF as their foundation and layering EU AI Act-specific requirements on top. The frameworks are complementary, and a strong AI RMF implementation covers much of the EU AI Act’s governance expectations.


Genesis IT Solutions provides EU AI Act review assessments for consulting and Internal Audit engagements. Contact us to discuss your AI compliance readiness.

Frequently Asked Questions

Does the EU AI Act apply to US companies?
Yes. The EU AI Act has extraterritorial reach similar to GDPR. You are in scope if your AI system's output is used in the EU, you place AI systems on the EU market (including SaaS with AI features), or you deploy AI systems that affect EU individuals.
What are the EU AI Act risk classifications?
The Act classifies AI systems into four tiers: Unacceptable Risk (banned), High Risk (strict compliance requirements), Limited Risk (transparency obligations), and Minimal Risk (no specific obligations).
When does the EU AI Act take full effect?
Enforcement is phased: prohibited practices took effect February 2025, GPAI obligations August 2025, high-risk system obligations August 2026, and full enforcement August 2027.