
"Good Enough" Readiness — Why Perfection Is the Enemy of AI Act Preparation


Your legal team just forwarded you a 458-page PDF about the EU AI Act.

You opened it. You scrolled. You felt a small wave of dread.

Then you closed it and went back to fixing the actual problems your customers have.

I get it. I've been there. But here's what I learned after talking to dozens of founders in your exact position: the most dangerous response to AI Act uncertainty isn't doing it wrong.

It's waiting to do it perfectly.

The Perfectionism Trap

Here's the thing nobody tells you about regulatory readiness: perfection doesn't exist yet.

Only 15 of the 45 technical standards are published. The Digital Omnibus proposal might delay high-risk obligations. The enforcement priorities won't be clear until 2027 at the earliest.

If you're waiting for complete clarity before you start, you're not being strategic. You're procrastinating.

And I say this with empathy, not judgment. Because the instinct makes total sense. When you don't know the rules, it feels safer to wait than to guess wrong.

But regulators don't see it that way.

What Regulators Actually Care About

I spent three weeks reading enforcement precedent from GDPR, MDR, and other EU regulations. Here's the pattern:

Regulators distinguish between two types of non-readiness:

Type 1: Didn't try

  • No documentation
  • No risk assessment
  • No process changes
  • No evidence of awareness

Type 2: Tried imperfectly

  • Documented your AI systems
  • Made a good-faith risk assessment
  • Built basic governance
  • Can show your work

The fines are wildly different. Type 1 gets hammered. Type 2 gets improvement orders and second chances.

Your goal isn't perfection. It's demonstrable good-faith effort.

The Four Readiness Tiers

Most guides treat AI Act readiness like a binary: you're either ready or you're not.

That's not how SMBs actually work. You don't have enterprise budgets. You can't hire a full-time AI governance officer. You need tiers.

Here's how I think about it:

Tier 1: Minimum Viable Readiness

Time investment: One focused weekend

What you build:

  • A list of every AI system you build or use (yes, ChatGPT counts if you're using it for customer data)
  • Basic transparency disclosures ("This feature uses AI")
  • A one-page doc explaining how each system works and what data it uses

Who this is for: Small teams (10-25 people) using off-the-shelf AI tools, not building custom AI systems

Does it cover everything? No. Does it prove you're not ignoring the regulation? Yes.
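If your team keeps inventories in code rather than docs, the Tier 1 list can be as simple as one structured record per system. Here's a minimal sketch; the field names are my own illustration, not anything the AI Act prescribes:

```python
from dataclasses import dataclass, asdict

# One Tier 1 inventory entry per AI system you build or use.
# Field names are illustrative assumptions, not a regulatory schema.
@dataclass
class AISystemRecord:
    name: str
    what_it_does: str               # one-sentence description
    data_used: str                  # e.g. "customer emails"
    touches_customer_data: bool
    makes_decisions_about_people: bool

inventory = [
    AISystemRecord(
        name="support-triage",
        what_it_does="Categorizes inbound support emails",
        data_used="customer emails",
        touches_customer_data=True,
        makes_decisions_about_people=False,
    ),
]

# Dump each record so it can be pasted into your one-page doc.
for record in inventory:
    print(asdict(record))
```

A spreadsheet works just as well; the point is that every system gets the same few fields answered.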

Tier 2: Bronze Readiness

Time investment: 2-3 weeks, revisited quarterly

What you add:

  • Detailed risk classification (which systems might be "AI that makes important decisions about people")
  • Governance documentation (who's responsible, how you make decisions)
  • Training data documentation (where it came from, how you cleaned it)
  • Basic change log (when you update models or data)

Who this is for: Most 25-75 person companies building AI features (chatbots, recommendations, automation)

Does it cover everything? For limited-risk systems, pretty much. For high-risk, it's your foundation.

This is where most SMBs should aim.
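The Bronze-tier change log doesn't need tooling either. A dated entry per model or data update is enough; this sketch assumes a simple list of dicts, and the field names are illustrative, not a required format:

```python
from datetime import date

# Bronze-tier change log: one dated entry per model or data update.
# Structure and field names are assumptions, not a regulatory format.
change_log = []

def log_change(system, change_type, description):
    """Append one dated entry; change_type might be 'model' or 'data'."""
    entry = {
        "date": date.today().isoformat(),
        "system": system,
        "type": change_type,
        "description": description,
    }
    change_log.append(entry)
    return entry

log_change("support-triage", "model", "Upgraded classifier to v2")
log_change("support-triage", "data", "Removed stale 2021 training examples")
print(len(change_log))
```

The value isn't the code; it's that when someone asks "when did you last change this model?", the answer exists.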

Tier 3: Silver Readiness

Time investment: 1-2 months, ongoing maintenance

What you add:

  • Formal readiness check process (the AI Act calls this a "conformity assessment" — we call it a readiness check)
  • Comprehensive system documentation (architecture, logic, limitations)
  • Impact reviews for systems that affect people significantly
  • Validation testing documentation

Who this is for: Companies building AI that makes important decisions about people — hiring tools, credit decisions, content moderation at scale

Does it cover everything? This is where "ready" actually means ready.

Tier 4: Gold Readiness

Time investment: Ongoing program, dedicated ownership

What you add:

  • Continuous monitoring and incident response
  • Regular training for teams
  • Third-party audits
  • Proactive updates as standards emerge

Who this is for: High-risk AI at scale, regulated industries, enterprise customers demanding it

Does it cover everything? Yes. And it's overkill for most SMBs.

Why "Good Enough" Is Actually Strategic

Let's do the math.

If you wait for perfect clarity:

  • You start in 2027 when enforcement begins
  • You're racing against deadlines
  • You're making decisions under pressure
  • You have zero operational data about whether your approach works

If you start with Bronze readiness now:

  • You build the habit of documentation
  • You identify gaps while there's time to fix them
  • You learn what's hard and what's easy for your team
  • You have 18 months of iteration before serious enforcement

"Good enough" compounds. Perfection doesn't.

The Weekend Readiness Checklist

If you do nothing else this month, do this. It takes 4-6 hours and moves you from Type 1 (didn't try) to Type 2 (tried imperfectly).

Friday night (1 hour):

  • List every AI system you build or use
  • Mark which ones touch customer data
  • Mark which ones make decisions about people

Saturday morning (2 hours):

  • For each system, write three sentences:
    • What it does
    • What data it uses
    • Who can override it (if anyone)

Saturday afternoon (1 hour):

  • Draft basic transparency language for your product
  • "This feature uses AI to [specific thing]. It bases recommendations on [type of data]. You can [how users control it]."

Sunday morning (1 hour):

  • Decide: which systems might be high-risk?
  • If none: you're probably Bronze-track
  • If yes: you're Silver-track, but Bronze is still your starting point

Sunday afternoon (30 min):

  • Put it all in a Google Doc
  • Share it with your team
  • Schedule a quarterly review

That's it. You just moved from "ignoring the regulation" to "demonstrable good-faith effort."

Is it complete? No. Is it a hell of a lot better than nothing? Yes.
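If your Friday-night list already lives in a spreadsheet or script, the Sunday "put it all in a doc" step can even be generated. A minimal sketch, assuming a plain list of dicts with illustrative field names:

```python
# Render a weekend-readiness one-pager from a simple system list.
# Field names and layout are assumptions, not a prescribed format.
systems = [
    {
        "name": "support-triage",
        "does": "Categorizes inbound support emails",
        "data": "customer emails",
        "override": "Any support agent can re-categorize",
        "high_risk": False,
    },
]

def render_one_pager(systems):
    lines = ["AI System Inventory", ""]
    for s in systems:
        lines.append(f"## {s['name']}")
        lines.append(f"- What it does: {s['does']}")
        lines.append(f"- Data used: {s['data']}")
        lines.append(f"- Who can override: {s['override']}")
        lines.append(f"- Possibly high-risk: {'yes' if s['high_risk'] else 'no'}")
        lines.append("")
    return "\n".join(lines)

print(render_one_pager(systems))
```

Paste the output into the shared doc, schedule the quarterly review, and you're done.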

What This Looks Like in Practice

Here's a real example (details changed):

Company: 40-person B2B SaaS, uses AI for email categorization, response suggestions, and search

What they did (Bronze tier):

  • Documented three AI systems (categorization, suggestions, search)
  • Classified all three as limited-risk (assists humans, doesn't decide)
  • Added transparency UI: "This suggestion was generated by AI"
  • Created a quarterly review process with their CTO and Head of Product

Time investment: 12 hours initial, 2 hours quarterly

Status: Not perfect. But defensible. And iterating.

They're not losing sleep. They're not hiring consultants. They're just being thoughtful.

That's the goal.

The Permission You're Looking For

You don't need a compliance officer.

You don't need to pause your roadmap.

You don't need to hire a law firm for six figures.

You need to document what you're doing, think through the risks, and show you're taking it seriously.

Bronze readiness isn't about legal perfection. It's about operational honesty.

You probably already know:

  • What AI you're using
  • What it's doing
  • What could go wrong

You just haven't written it down in a structured way.

That's the gap. And it's smaller than you think.

Start This Weekend

The AI Act isn't going away. The standards will keep evolving. The deadlines will keep approaching.

But you don't need to solve everything today.

You need to move from "ignoring it" to "working on it."

And the fastest way to do that? Start small. Start imperfect. Start this weekend.

Because the SMBs that thrive through regulatory change aren't the ones who built perfect programs.

They're the ones who started early, iterated often, and didn't let perfection kill momentum.

Download our Weekend Readiness Checklist and start today.


This document supports readiness preparation. It does not constitute legal advice.

Ready to find out if this applies to you?

The AI Act assessment takes 3 minutes. No signup. You'll see your classification instantly.
