Last week I showed you that your AI coding agent can read your SSH keys. Turns out that was the easy part. I run 5 MCP servers con…

The Setup

MCP (Model Context Protocol) lets AI agents call external tools. Instead of just reading files and running bash, the agent gets structured access to APIs, databases, and services. Here’s what a typical multi-server config looks like:

{
  "mcpServers": {
    "automation": { "command": "npx", "args": ["workflow-automation-mcp"] },
    "database-main": { "command": "npx", "args": ["database-mcp"] },
    "database-secondary": { "command": "npx", "args": ["database-mcp"] },
    "code-graph": { "command": "npx", "args": ["code-graph-mcp"] },
    "docs": { "command": "npx", "args": ["docs-mcp"] }
  }
}

Five servers. Two database projects. One workflow automation instance running dozens of production workflows. A code graph analyzer. A documentation fetcher.

What Made Me Stop and Audit

I was debugging a workflow late at night. My agent needed to check why a cron job wasn’t firing. So it ran a SQL query against my production database. Then another. Then it modified a workflow node. Then it fetched execution logs containing customer email addresses.

All of it happened automatically. No confirmation prompts. No approval gates. I had auto-approved every read operation across all five servers. The agent was doing exactly what I asked. That was the problem. I had never asked myself what else it could do.

What Each Server Can Actually Do

A workflow automation server commonly exposes 15-20 operations. Tools like create_workflow, update_workflow, delete_workflow, test_workflow. Your agent can create new automations, modify running ones, or delete them entirely. It can read execution logs containing customer data.

A database server typically exposes execute_sql. That’s the big one. Arbitrary SQL against your production database. SELECT, INSERT, UPDATE, DELETE. It can read every table. It can apply migrations to alter schema. Two connected projects means two databases, both wide open to any query the agent constructs.
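If you want reads auto-approved but writes gated, one option is a crude statement classifier in front of execute_sql. This is a sketch under my own assumptions, not a real parser; the function name and keyword list are illustrative, and things like CTEs that wrap DML are exactly why a first-keyword check alone isn't sufficient:

```python
import re

# Statements whose first keyword is read-only. A sketch only: real SQL
# (CTEs wrapping DML, DO blocks, vendor extensions) needs a proper parser.
READ_KEYWORDS = {"select", "show", "explain", "describe"}

def is_read_only(sql: str) -> bool:
    """Return True if the statement *looks* read-only by its first keyword."""
    # Strip line comments and whitespace before classifying.
    stripped = re.sub(r"--[^\n]*", "", sql).strip()
    if not stripped:
        return False
    # Reject multi-statement strings outright; a trailing DELETE could hide
    # behind a harmless-looking SELECT.
    if ";" in stripped.rstrip(";"):
        return False
    first = stripped.split(None, 1)[0].lower()
    return first in READ_KEYWORDS
```

A gate like this would live in whatever wrapper launches the database MCP server, downgrading anything non-read to a manual confirmation instead of an auto-approval.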

A code analysis server can run graph queries against a model of your entire codebase. Every function, every import, every dependency relationship.

A documentation server fetches live docs. Lower risk, but still a vector. Any documentation page it fetches could contain prompt injection payloads.

My 5 Safeguards

1. Scoped permissions. My settings file now has explicit allow-lists. Read operations are auto-approved. Write operations require manual confirmation every time. This one change would have caught the late-night incident.

2. Deny lists. curl, wget, ssh, python3, node are all blocked in bash. The agent cannot make outbound HTTP requests or spawn interpreters.

3. PreToolUse hooks. Three scripts run before every tool call. One catches data exfiltration patterns. One blocks access to .env, .ssh, and key files. One prevents the agent from editing its own security rules.

4. Network isolation. Services run in Docker containers on private networks. MCP servers connect through API keys, not direct database access.

5. Operational safety rules. A document loaded at the start of every session lists which operations are safe and which corrupt data. Certain operations are explicitly banned because they've caused production outages.
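For concreteness, here's roughly what safeguards 1 and 2 look like in a Claude Code-style settings file. The exact schema varies by agent, and the server and tool names below are illustrative, not my actual config:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "mcp__docs__fetch",
      "mcp__code-graph__query"
    ],
    "ask": [
      "mcp__database-main__execute_sql",
      "mcp__automation__update_workflow",
      "mcp__automation__delete_workflow"
    ],
    "deny": [
      "Bash(curl:*)",
      "Bash(wget:*)",
      "Bash(ssh:*)",
      "Bash(python3:*)",
      "Bash(node:*)"
    ]
  }
}
```

The important part is the split: genuinely read-only tools go in the allow list, anything that can mutate state goes in ask, and the exfiltration-capable binaries are denied outright.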
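The hooks in safeguard 3 are just scripts that run before each tool call. Here's a sketch of the secrets-path blocker, assuming a Claude Code-style PreToolUse hook that receives the tool call as JSON on stdin and blocks it when the script exits with code 2. The pattern list is mine and deliberately incomplete:

```python
import json
import sys

# File patterns the agent should never touch. Illustrative, not exhaustive.
BLOCKED_PATTERNS = (".env", ".ssh/", "id_rsa", ".pem", "credentials")

def should_block(tool_input: dict) -> bool:
    """True if any string value in the tool input references a secret path."""
    for value in tool_input.values():
        if isinstance(value, str) and any(p in value for p in BLOCKED_PATTERNS):
            return True
    return False

# Wiring (commented out so the sketch stays import-safe): a real hook would
# read the tool call from stdin and exit 2 to block it.
# payload = json.load(sys.stdin)
# if should_block(payload.get("tool_input", {})):
#     print("Blocked: tool call touches a secret path", file=sys.stderr)
#     sys.exit(2)
```

The other two hooks follow the same shape: inspect the pending tool call, exit non-zero if it matches an exfiltration pattern or targets the security rules themselves.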

The Real Risk

The danger isn’t your AI deciding to drop your database. It’s prompt injection through tool results. Your agent calls execute_sql and gets back a result. That result is now in the agent’s context. A crafted payload in a database field or a fetched documentation page could instruct the agent to do something you didn’t ask for. Every MCP tool is an injection surface.
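You can't fully solve injection-through-tool-results, but you can at least flag the obvious payloads before acting on them. A minimal sketch; the phrase list is mine and trivially bypassable by rephrasing, which is the point: treat this as a tripwire for auditing, not a defense:

```python
import re

# Imperative phrases that commonly appear in prompt-injection payloads.
# Easily evaded by rewording; useful only as an audit tripwire.
SUSPICIOUS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"do not (tell|inform) the user",
    r"system prompt",
]

def flag_injection(tool_result: str) -> list[str]:
    """Return the suspicious phrases found in a tool result, if any."""
    text = tool_result.lower()
    return [p for p in SUSPICIOUS if re.search(p, text)]
```

Anything flagged gets logged and the session gets a second look; the real mitigation remains the permission layer above, which limits what a hijacked agent can actually do.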

Still Worth It

I use all five servers daily. The productivity gain is massive. I manage dozens of workflows, multiple databases, and a full codebase from a single conversation. But I spent a full day building the permission layer around it. Audit your MCP configs. Count the tools. Check what’s auto-approved. The answer will probably surprise you.
