What Is the EU AI Act?
The first AI regulation with teeth. Published in July 2024 and enforced in phases through 2027, it applies to any company serving EU customers, regardless of where you're based.
Here's what it actually means for your business.
Does It Apply to My Company?
If you use AI and serve EU customers — yes. But "applies" doesn't mean what you think.
Most SMBs land in one of two buckets:
Transparency Obligations
Customers interact with your AI — chatbots, AI-generated content, synthetic media. You need to tell them it's AI. That's a disclosure task, not a documentation project.
General AI Use
Internal tools, analytics, recommendations that don't decide anything about people. Since February 2025, one rule: your team needs to understand what AI they're using.
The heavy requirements — system documentation, formal risk assessments, third-party audits — only kick in if your AI makes consequential decisions about people. Hiring algorithms, credit scoring, medical diagnostics. Most SMBs aren't here.
The Risk Levels
Prohibited Practices
Social credit scoring, real-time public facial recognition, emotion inference at work or school. Manipulation, exploitation, mass surveillance.
Don't build these. You almost certainly aren't.
High-Risk AI
AI that decides who gets hired, who gets a loan, who gets medical treatment, who gets access to essential services. The stakes-matter category.
Document the system, implement human review, maintain decision logs, train your users.
Transparency Rules
Chatbots, AI-generated text, images, video, audio, deepfakes, biometric categorization. If people interact with your AI or consume what it creates, they need to know.
Disclose AI in conversations. Label synthetic content. Mark deepfakes clearly.
General Use
Internal tools, analytics, spam filters, recommendations. The AI that runs in the background without deciding anyone's fate.
List your AI tools. Train your team on limitations. Done.
Key Deadlines
The date that matters for most companies: August 2, 2026.
February 2, 2025
Already in force. Prohibited practices banned. AI literacy required for all deployers.
August 2, 2025
General-purpose AI model rules apply. National governance bodies operational.
August 2, 2026
Transparency obligations and high-risk requirements enforceable. This is the one.
August 2, 2027
High-risk AI in regulated products (medical devices, machinery) fully enforceable.
What Do I Actually Need to Do?
Chatbots & AI Content
A few hours to a weekend
- Disclosure language for AI interactions
- Machine-readable labels on synthetic images, video, audio
- Internal log of AI systems in use
AI That Decides About People
2–4 weeks initial, then ongoing
- System documentation (what it does, how it works, what data it uses)
- Risk assessment (what can go wrong, how you manage it)
- Human oversight process (who reviews, when, how)
- Training data docs + decision logs (6+ months retention)
Internal Tools Only
A few hours
- Inventory of AI tools in use (yes, including ChatGPT)
- Basic team training on AI limitations
Most SMBs handle this in a weekend to a few weeks. Real work, but not a multi-year programme.
Act-Ready handles all of this for you — usually in under an hour.
Resources
Practical guides for the obligations most SMBs actually face. All are free downloads.
Chatbot Disclosure Template Pack
10 ready-to-use disclosure templates for AI-powered chatbots and conversational interfaces.
Should You Wait for the Digital Omnibus?
A decision framework for the Digital Omnibus timing question.
AI Act Technical Standards
Which technical standards are available and which are still in development.
Weekend Readiness Checklist
Your 4–6 hour path to minimum viable AI Act readiness.
The Bottom Line
The AI Act isn't going away. August 2, 2026 isn't moving. Most SMBs aren't building high-risk AI — most obligations are transparency rules, not documentation marathons.
You don't need perfect readiness. You need to show you're taking it seriously.
This page supports readiness preparation. It does not constitute legal advice.