
What Happens If You Do Nothing? Real AI Act Enforcement Scenarios

5 min read

Let's talk about fines. Because that's what you're actually worried about, right?

You've seen the headlines: "Up to €35 million in penalties!" "7% of global revenue!" And you're wondering if your chatbot's disclosure wording is going to bankrupt your company.

Here's the truth: You're probably wrong about enforcement. But not in the way you think.

The Two Myths Everyone Believes

Myth 1: "We're too small to be noticed."

Wrong. The AI Act doesn't have a minimum company size threshold. Your 40-person company using customer support chatbots has the same transparency obligations as a Fortune 500 company using the same technology. National enforcement bodies aren't sorting companies by headcount—they're looking at what you're doing with AI.

Myth 2: "One mistake = massive fine."

Also wrong. The astronomical fines you've read about? Those are maximums reserved for the worst violations: deploying prohibited AI systems (like social credit scoring or real-time biometric surveillance in public spaces), systematic intentional non-readiness, or refusing to cooperate with authorities. Not for getting your chatbot disclosure wording slightly wrong.

The real enforcement picture is more nuanced. And honestly? More manageable than either extreme suggests.

How Enforcement Actually Works

Here's what most SMEs don't realize: there's no "EU AI Police" showing up at your office.

Enforcement happens at the national level. Each EU Member State designates its own market surveillance authority—your country's AI regulator. As of February 2026, many countries still haven't even designated these authorities yet. Hungary and Italy? No designated enforcement bodies at all.

This matters because year one of enforcement is going to be a learning curve for everyone. Regulators included.

The Penalty Structure Nobody Explains Clearly

Yes, the AI Act has serious maximum penalties. But they're tiered based on violation severity:

  • Up to €35 million or 7% of global annual turnover (whichever is higher): For deploying prohibited AI practices. Think facial recognition for mass surveillance, manipulative AI targeting vulnerable people, social credit scoring systems. If you're not building dystopian tech, you're not in this tier.

  • Up to €15 million or 3% of global turnover: For violations of high-risk AI system requirements. This is where things get real for companies building AI that makes important decisions about people—hiring algorithms, credit scoring, medical diagnostics.

  • Up to €7.5 million or 1.5% of global turnover: For providing incorrect or incomplete information to authorities, or failing to cooperate with enforcement investigations.

Here's the critical part SMEs miss: the Act explicitly scales penalties to company size. For SMEs and start-ups, each fine is capped at whichever of the two figures (the fixed amount or the turnover percentage) is lower, not higher. A company with €50 million in annual turnover facing the 3% tier tops out at €1.5 million. Still painful, but not the €35 million headline.

And those are maximums. First-time violations, good-faith efforts that fell short, quick corrections when notified—all of these factor into actual penalty decisions.
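The "whichever is higher/lower" mechanics above are easy to get backwards, so here is a tiny sketch of the arithmetic. The tier figures are the Act's stated maximums; the function names are ours, and an actual fine would depend on severity, cooperation, and the other factors discussed above.

```python
# Sketch of the AI Act's fine-cap arithmetic. These are ceilings, not
# predictions of what any real fine would be.

def general_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """General rule: up to the fixed amount or a percentage of worldwide
    annual turnover, whichever is HIGHER."""
    return max(fixed_cap_eur, turnover_eur * pct)

def sme_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """SME/start-up rule: the same two figures, whichever is LOWER."""
    return min(fixed_cap_eur, turnover_eur * pct)

# The €15M-or-3% tier for a company with €50M annual turnover:
print(general_cap(50e6, 15e6, 0.03))  # 15000000.0 — the fixed cap dominates
print(sme_cap(50e6, 15e6, 0.03))      # 1500000.0 — the €1.5M figure for an SME
```

The asymmetry is the whole point: for a large enterprise the bigger number binds, for an SME the smaller one does.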

What Enforcement Will Actually Prioritize (Especially in Year One)

Regulators aren't going to waste limited resources chasing chatbot disclosure formatting errors. They're going to focus on:

1. Prohibited AI practices (already in force since February 2, 2025). Real-time biometric surveillance, emotion recognition in workplaces and schools, AI systems that manipulate vulnerable people. If you're not doing any of this, you're not in the crosshairs.

2. High-risk system violations. AI used for hiring, credit decisions, law enforcement, critical infrastructure, medical devices. If your systems make important decisions about people and you've done zero readiness work, you're at higher risk. If you're deploying off-the-shelf tools? Much lower priority.

3. Systematic, intentional non-readiness. The company that clearly knows about the AI Act, has been repeatedly informed of obligations, and deliberately ignores them. This is the "we don't care" scenario. Very different from "we tried but got some details wrong."

What they're NOT prioritizing: Imperfect documentation. Chatbot disclosure that's present but not perfectly worded. Transparency efforts that show good faith but aren't flawless.

Year-one reality: Authorities are figuring this out too. Many Member States haven't operationalized their regulatory sandboxes yet, technical standards are incomplete, and enforcement bodies are still hiring staff. Expect guidance, warnings, and opportunities to correct before penalties for companies making genuine efforts.

The Risks That Actually Matter for SMEs (Hint: Not Fines)

Here's what keeps me up at night—and what should concern you more than hypothetical penalties:

Lost customer trust. Your enterprise clients are already getting questionnaires from their legal teams: "Are you AI Act ready?" If you can't answer confidently, that deal goes to your competitor who can.

Competitive disadvantage. When tenders and RFPs start including "demonstrate AI Act readiness" as a requirement, companies with documented readiness win. It's already happening in regulated industries.

Reputational damage. Not from regulators—from customers. "Company X failed basic AI transparency requirements" is not the headline you want, even if no fine was issued.

Blocked opportunities. Some markets and customer segments will require demonstrated readiness as table stakes. No readiness documentation = no access.

These aren't hypothetical. They're happening now in procurement processes across Europe.

What "Getting Caught" Actually Looks Like

Let's walk through a realistic scenario:

You're a 60-person SaaS company. You use AI-powered chatbots for customer support and an AI tool that helps your sales team prioritize leads. You haven't done formal AI Act readiness work.

A customer reports to their national enforcement body that your chatbot doesn't disclose it's AI. The authority contacts you for information.

If you respond like this: "We weren't aware of the specific disclosure requirement. Here's our system documentation showing what AI we use. We've now added disclosure to our chatbot as of [date]. Here's the language we're using. We're working on a full system inventory and risk assessment—here's our timeline."

Likely outcome: Warning. Guidance on proper disclosure format. Follow-up check in 30-60 days. No fine. Your responsiveness and documented good-faith effort matter enormously.

If you respond like this: Ignore the inquiry. Or: "We don't think this applies to us." Or: "We'll get to it eventually."

Likely outcome: Formal investigation. Potential penalty even for a first violation, because you've demonstrated unwillingness to cooperate. Reputational damage from prolonged enforcement action.

The difference isn't perfection—it's attitude and documentation.

The Best Strategy: Document Everything

Here's what actually protects SMEs:

Document your readiness efforts. System inventory. Risk classification decisions. Why you concluded certain systems weren't high-risk. When you implemented transparency disclosures. Your decision-making process.

This isn't just about avoiding enforcement—it's smart business. When a customer, partner, or authority asks "Are you AI Act ready?", you need to show your work.
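As a concrete starting point, a system inventory can be as simple as one structured record per AI system. A minimal sketch follows; the field names and risk-class labels are our own suggestion, since the Act prescribes no particular format.

```python
# A minimal shape for a documented AI system inventory. Fields and labels
# are illustrative, not anything mandated by the AI Act.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    name: str                         # e.g. "Support chatbot"
    purpose: str                      # what it does and who it affects
    risk_class: str                   # e.g. "high-risk", "limited", "minimal"
    rationale: str                    # why you concluded that classification
    disclosure_added: Optional[date]  # when transparency wording went live, if any

inventory = [
    AISystemRecord(
        name="Support chatbot",
        purpose="Answers customer questions on the website",
        risk_class="limited",
        rationale="Interacts directly with people; transparency disclosure applies",
        disclosure_added=date(2026, 3, 1),
    ),
    AISystemRecord(
        name="Lead-scoring tool",
        purpose="Prioritizes inbound sales leads for the sales team",
        risk_class="minimal",
        rationale="Internal prioritization aid; no significant effect on individuals",
        disclosure_added=None,
    ),
]
```

Even a spreadsheet with these columns does the job. The point is a dated, written record of what you run, why you classified it the way you did, and when you acted on it.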

Show good faith. Regulators will distinguish between "didn't try" and "tried imperfectly." You want to be in the second group. Perfect readiness is impossible when half the technical standards aren't even published yet. But documented efforts to follow available guidance? That matters.

Correct quickly when informed. If an authority or customer points out a gap, fix it fast and document what you changed. Speed of response shows you take obligations seriously.

Enforcement FAQ

Q: Will they really go after small companies?
A: They'll go after violations, not company sizes. But enforcement priorities are real. Prohibited AI and high-risk system failures will get attention before transparency disclosure imperfections.

Q: What if we get something wrong despite trying?
A: Document that you tried. Show your reasoning. Correct when informed. This is very different from not trying at all.

Q: Should we wait until enforcement bodies are fully operational?
A: No. The August 2, 2026 deadline for transparency obligations isn't changing regardless of Member State readiness. And documentation of pre-deadline efforts protects you.

Q: What if the Digital Omnibus delays obligations?
A: Transparency obligations (Article 50) aren't affected by the proposed delay. Even if high-risk system requirements get pushed to December 2027, chatbot disclosures and AI-generated content marking still take effect August 2, 2026.

Q: Can we just hire a lawyer when something happens?
A: Reactive legal help is expensive and stressful. Proactive readiness documentation is cheaper and actually protects you. By the time you're responding to an enforcement inquiry, you've already lost the "we were prepared" advantage.

The Bottom Line

Enforcement isn't a myth. But it's not a random lightning strike either.

You're not too small to matter. But you're also not one mistake away from bankruptcy. The real risk isn't catastrophic fines—it's competitive disadvantage, lost trust, and blocked opportunities.

The companies that'll struggle most aren't the ones who got disclosure wording imperfect. They're the ones who did nothing, documented nothing, and can't demonstrate any readiness effort when asked.

Document your readiness efforts now. Show good faith. Correct quickly when you learn something new.

That's not just avoiding enforcement—it's good business.

Ready to start? Download our system inventory template and document what AI you're actually using. It's the first step in any defensible readiness effort—and it's something you can do this afternoon.

[CTA: Download System Inventory Template]


This document supports readiness preparation. It does not constitute legal advice.

