I’m Based in the US — Why Does the EU AI Act Apply to My Python App?

Six months ago, I would have laughed if you’d told me a European regulation could apply to my side project hosted on a $5 DigitalOcean droplet in New York.

I’m not laughing anymore.

The Clause Nobody Reads

Everyone talks about Article 6 (risk classification) and Article 50 (transparency). But Article 2 — the one that defines who this regulation covers — gets skipped.

Here’s the relevant bit, paraphrased:

The EU AI Act applies to providers and deployers of AI systems, regardless of whether they are established in the EU, if the output produced by the AI system is used in the EU.

Read that again. Regardless of whether they are established in the EU.

If a single user in France, Germany, or any of the 27 EU member states uses your AI-powered app, you’re in scope.

Wait, Isn’t This Just GDPR All Over Again?

Yes. Same playbook.

In 2018, GDPR caught thousands of US companies off guard. “We don’t have EU servers” was not a defense. “We don’t specifically target EU users” was barely a defense.

The EU AI Act does the same thing, but for AI systems instead of personal data. And the penalties are steeper: up to €35 million or 7% of global annual turnover (GDPR maxes out at €20M or 4%).
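To make the “whichever is higher” mechanics concrete, here’s a minimal sketch (the function name and example turnover are mine, not from the Act):

def max_ai_act_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious violations (prohibited
    practices): EUR 35M or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1B turnover: the 7% prong wins at EUR 70M
print(f"{max_ai_act_fine_eur(1_000_000_000):,.0f}")  # 70,000,000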

A 5-Question Self-Check

Before you spend hours reading legal text, run through this:

  1. Can someone in the EU access your AI feature? (even via API)
  2. Do you have any EU-based customers or users?
  3. Do your terms of service NOT explicitly exclude EU users?
  4. Is your AI model deployed by a third party that serves EU users?
  5. Do you process data from EU-based sources?

One YES = you’re likely in scope.

For most indie devs and small SaaS products: the answer to question 1 is almost always yes. Unless you’re actively geo-blocking EU IP addresses, you’re probably in scope.
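If you do decide to geo-block, here’s a minimal sketch for a Flask app sitting behind Cloudflare, which sets the CF-IPCountry request header when IP geolocation is enabled (the header is Cloudflare’s; everything else here is illustrative, and VPNs will still get around it):

from flask import Flask, abort, request

app = Flask(__name__)

EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE"
}

@app.before_request
def block_eu_traffic():
    # "XX" means Cloudflare couldn't resolve the country,
    # so unresolvable IPs fail open here.
    country = request.headers.get("CF-IPCountry", "XX")
    if country in EU_COUNTRIES:
        abort(451)  # 451 Unavailable For Legal Reasons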

So I’m in Scope. Now What?

Don’t panic. Being in scope doesn’t mean heavy compliance for everyone. It depends on your risk level (a rough triage sketch follows this list):

  • Most AI apps (chatbots, recommendation engines, content tools): Minimal or limited risk. You just need basic transparency — tell users they’re interacting with AI.
  • Some AI apps (HR screening, credit scoring, medical diagnosis): High risk. Full compliance stack required.
  • A few AI apps (social scoring, manipulation): Banned outright.
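As a rough mental model, the triage looks like this (the categories are my shorthand, not the Act’s Annex III wording; real classification means reading Articles 5, 6, and 50):

# Illustrative only -- not a legal classification
RISK_TIERS = {
    "social_scoring": "prohibited",
    "behavioral_manipulation": "prohibited",
    "hr_screening": "high",
    "credit_scoring": "high",
    "medical_diagnosis": "high",
    "chatbot": "limited (transparency notice)",
    "recommendation_engine": "minimal",
    "content_tool": "minimal",
}

def triage(use_case: str) -> str:
    return RISK_TIERS.get(use_case, "unknown -- check Article 6")

print(triage("chatbot"))  # limited (transparency notice)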

Here’s a quick check I wrote to determine scope:

# ISO 3166-1 alpha-2 codes for the 27 EU member states
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE"
}

def am_i_in_scope(config: dict) -> dict:
    """Quick EU AI Act Article 2 scope check."""
    triggers = []

    if config.get("provider_country") in EU_COUNTRIES:
        triggers.append("You are established in the EU")

    if config.get("has_eu_users"):
        triggers.append("Your AI output reaches EU users")

    if config.get("eu_accessible") and not config.get("geo_blocks_eu"):
        triggers.append("AI service accessible from EU without geo-block")

    if config.get("third_party_eu_deployment"):
        triggers.append("Third party deploys your model in the EU")

    # August 2, 2026 is when the bulk of remaining obligations apply
    return {
        "in_scope": len(triggers) > 0,
        "triggers": triggers,
        "risk_level": "Check Article 6" if triggers else "N/A",
        "deadline": "August 2, 2026"
    }

# Example: US-based SaaS with global users
result = am_i_in_scope({
    "provider_country": "US",
    "has_eu_users": True,
    "eu_accessible": True,
    "geo_blocks_eu": False,
    "third_party_eu_deployment": False
})
print(result)
# {'in_scope': True, 'triggers': ['Your AI output reaches EU users',
#  'AI service accessible from EU without geo-block'],
#  'risk_level': 'Check Article 6', 'deadline': 'August 2, 2026'}

What I Actually Did

I ran my own project through an automated scanner. It caught three things:

  1. No scope determination documented — I hadn’t even asked “does this apply to me?”
  2. No transparency notice — my chatbot feature didn’t disclose it was AI-generated
  3. No risk classification — I assumed “small project = no risk” (wrong framing)

The fix took about 2 hours. Adding a transparency notice was 10 lines of code. Documenting the risk classification was a markdown file. The scope determination was the script above.
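For reference, a transparency notice really can be that small. A minimal sketch (the wording and placement are yours to choose; the Act just requires that users know they’re dealing with AI):

AI_DISCLOSURE = (
    "This response was generated by an AI system. "
    "It may contain errors; verify important information."
)

def with_disclosure(ai_reply: str) -> str:
    # Append the disclosure to every AI-generated response
    return f"{ai_reply}\n\n---\n{AI_DISCLOSURE}"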

If you want to automate the check, I built a free open-source tool that scans Python codebases for compliance gaps: mcp-eu-ai-act on GitHub.

The Timeline That Matters

  • February 2, 2025: Prohibited AI practices already banned
  • August 2, 2025: General-purpose AI rules in effect
  • August 2, 2026: Full enforcement — including high-risk system requirements

If you’re reading this in 2026, you have months, not years.
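If you want a number instead of a feeling, this snippet tells you how long you have (date taken from the timeline above):

from datetime import date

days_left = (date(2026, 8, 2) - date.today()).days
print(f"{days_left} days until full enforcement")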

TL;DR

  • Article 2 gives the EU AI Act GDPR-style global reach
  • If EU users can access your AI feature, you’re likely in scope
  • Most apps are minimal risk (transparency only, not a huge burden)
  • Don’t wait for August 2026 to find out — run the check now
