EU AI Act Compliance: What U.S. Organizations Should Know
The EU AI Act is the world’s first comprehensive AI regulation. Adopted in 2024 and entering enforcement in phases through 2027, it establishes binding requirements for AI systems based on their risk level. Nor does it apply only to European companies: any organization whose AI systems affect people in the EU is in scope.
In short: if your organization deploys, develops, or procures AI systems that touch EU markets or EU individuals, you need to understand and prepare for the EU AI Act.
Why Should U.S. Organizations Care?
The EU AI Act has extraterritorial reach, similar to GDPR. You are in scope if:
- Your AI system’s output is used in the EU — even if the system is hosted in the United States
- You place AI systems on the EU market — including SaaS products with AI features
- You are a deployer of AI systems that affect individuals in the EU — even if you did not build the AI
For organizations with any EU exposure — clients, employees, customers, or partners — the EU AI Act is not optional.
How Does the EU AI Act Classify AI Risk?
The Act uses a risk-based tiered approach:
Unacceptable Risk (Banned)
AI systems that pose a clear threat to fundamental rights:
- Social scoring by governments
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Manipulation techniques that exploit vulnerabilities
- Emotion recognition in workplaces and educational institutions
High Risk
AI systems used in areas with significant impact on individuals:
- Employment and worker management (hiring, performance evaluation, termination)
- Credit scoring and financial services
- Education (admissions, grading, proctoring)
- Critical infrastructure management
- Law enforcement and immigration
- Healthcare and medical devices
High-risk systems face the most extensive compliance obligations.
Limited Risk
AI systems that interact with people but pose moderate risk:
- Chatbots and virtual assistants (must disclose they are AI)
- Deepfake generators (must label output as AI-generated)
- Emotion recognition systems (must inform subjects)
Minimal Risk
AI systems with negligible risk (e.g., spam filters, game AI). No specific obligations beyond voluntary codes of practice.
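The four tiers above lend themselves to a first-pass lookup in an internal AI inventory. The sketch below is illustrative only: the category names and the `classify` function are assumptions for this example, not the Act's legal taxonomy, and a real classification requires legal analysis of Article 5 and Annex III.

```python
# Illustrative first-pass risk triage for an AI system inventory.
# Category names are simplified assumptions, not the Act's legal text.

BANNED = {"social_scoring", "realtime_public_biometric_id",
          "workplace_emotion_recognition"}
HIGH_RISK = {"hiring", "credit_scoring", "education_admissions",
             "critical_infrastructure", "law_enforcement", "medical_device"}
LIMITED_RISK = {"chatbot", "deepfake_generator", "emotion_recognition_other"}

def classify(use_case: str) -> str:
    """Return a provisional EU AI Act risk tier for an inventoried use case."""
    if use_case in BANNED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"  # e.g., spam filters, game AI

print(classify("hiring"))       # high
print(classify("chatbot"))      # limited
print(classify("spam_filter"))  # minimal
```

A triage like this is useful for scoping an inventory and flagging systems for legal review; it is not a substitute for a formal conformity assessment.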
What Are the Key Compliance Obligations?
For high-risk AI systems, the EU AI Act requires:
| Requirement | Description |
|---|---|
| Risk management system | Continuous identification and mitigation of AI risks |
| Data governance | Training data quality, relevance, and representativeness |
| Technical documentation | Detailed documentation of the AI system’s design, development, and capabilities |
| Record-keeping | Automatic logging of AI system operations |
| Transparency | Clear information to deployers and users about system capabilities and limitations |
| Human oversight | Mechanisms for human intervention and override |
| Accuracy and robustness | Performance standards including cybersecurity measures |
| Conformity assessment | Pre-market assessment for certain high-risk categories |
For general-purpose AI (GPAI) models — including large language models — there are additional transparency and documentation obligations, with stricter rules for models deemed to pose “systemic risk.”
What Is the Enforcement Timeline?
| Date | Milestone |
|---|---|
| August 2024 | EU AI Act enters into force |
| February 2025 | Prohibited AI practices take effect |
| August 2025 | GPAI model obligations take effect |
| August 2026 | High-risk AI system obligations take effect |
| August 2027 | Full enforcement for all remaining provisions |
Organizations should not wait for the final deadline. Building compliance infrastructure takes time, and regulators expect evidence of progress.
How Does an EU AI Act Review Assessment Work?
A review assessment evaluates your organization’s AI systems and governance against EU AI Act requirements:
- AI System Inventory — Identify all AI systems in scope, their risk classifications, and their EU exposure
- Gap Analysis — Compare current practices against the Act’s requirements for each risk tier
- Compliance Roadmap — Prioritized recommendations for closing gaps before enforcement deadlines
- Documentation Review — Assess existing technical documentation, risk assessments, and governance structures
The assessment is practical and forward-looking — it tells you where you stand today and what to do next.
EU AI Act vs. U.S. AI Governance
| Aspect | EU AI Act | U.S. Approach |
|---|---|---|
| Nature | Binding regulation with penalties | Voluntary frameworks (NIST AI RMF) plus sector-specific rules |
| Scope | AI systems placed on the EU market or affecting individuals in the EU | Varies by sector and state |
| Penalties | Up to €35M or 7% of global annual turnover, whichever is higher | Varies; no single federal AI penalty framework |
| Timeline | Phased through 2027 | Evolving; state laws emerging |
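For the most serious violations, the cap is the greater of a fixed €35M or 7% of worldwide annual turnover, so for large companies the percentage prong dominates. A quick illustration of that arithmetic (the turnover figures are hypothetical):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound for the most serious EU AI Act violations:
    the greater of EUR 35M or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical EUR 1B turnover: 7% = EUR 70M, so the percentage prong applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# Hypothetical EUR 100M turnover: 7% = EUR 7M, so the EUR 35M floor applies.
print(max_fine_eur(100_000_000))    # 35000000
```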
Many organizations are using the NIST AI RMF as their foundation and layering EU AI Act-specific requirements on top. The frameworks are complementary, and a strong AI RMF implementation covers much of the EU AI Act’s governance expectations.
Genesis IT Solutions provides EU AI Act review assessments for consulting and Internal Audit engagements. Contact us to discuss your AI compliance readiness.
Frequently Asked Questions
- Does the EU AI Act apply to U.S. companies?
- Yes. The EU AI Act has extraterritorial reach similar to GDPR. You are in scope if your AI system's output is used in the EU, you place AI systems on the EU market (including SaaS with AI features), or you deploy AI systems that affect EU individuals.
- What are the EU AI Act risk classifications?
- The Act classifies AI systems into four tiers: Unacceptable Risk (banned), High Risk (strict compliance requirements), Limited Risk (transparency obligations), and Minimal Risk (no specific obligations).
- When does the EU AI Act take full effect?
- Enforcement is phased: prohibited practices took effect February 2025, GPAI obligations August 2025, high-risk system obligations August 2026, and full enforcement August 2027.