We are living through a gold rush of AI tooling. Every week brings a new standard or protocol that promises to revolutionize how Large Language Models interact with our infrastructure. The current darling of this movement is the Model Context Protocol (MCP).
The promise of MCP is seductive: it offers a standardized way for AI assistants to connect to data sources and tools. In theory, it should be the missing link that turns a chatty LLM into a capable DevOps engineer.
After spending significant time integrating these tools, I have come to a controversial conclusion. When it comes to managing platforms with a massive surface area of REST APIs, such as AWS or Kubernetes, command-line interfaces (CLIs) are giving MCP servers tough competition.
At this moment, there is no clear evidence that LLMs work more efficiently or faster simply because they are accessing an API through an MCP server rather than a standard CLI.
Let us break down why the CLI might actually be the superior tool for your AI agents and where the current implementation of MCP is falling short.
The Friction of MCP Servers
On paper, MCP sounds cleaner, but in practice, specifically for platform engineering and DevOps, it introduces a layer of friction that we simply do not see with mature CLIs.
1. The Discovery and Configuration Nightmare
The first hurdle is simply getting started. With an MCP-based workflow you are responsible for discovering and configuring the specific server for your needs. This sounds trivial until you realize that for any major platform the ecosystem is fragmented.
If you are new to a platform, you do not know which community-maintained MCP server is the correct one. You have to hunt through repositories and check commit histories, hoping the maintainer has not abandoned the project.
On platforms with a large number of REST APIs, finding the correct MCP server becomes a legitimate taxonomy problem. Unlike a monolithic CLI, where the provider name usually covers everything, MCP servers are often split by domain or service. You might end up needing five different servers just to manage one cloud environment.
2. The Lack of Shared Configuration
One of the biggest pain points we are seeing today is the lack of shared configuration.
If I configure my AWS CLI profile in my home directory, every tool on my machine, from Terraform to the Python SDK, respects that configuration.
With MCP, you cannot currently configure a server once and use it across all clients. You configure it for VS Code, then again for Windsurf, and then again for Cursor. It is a violation of the DRY principle for your local development environment.
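The duplication is easy to demonstrate with a small Python sketch. The CLI config below follows the real `~/.aws/config` format; the MCP file paths and JSON shapes are illustrative stand-ins for each client's own config location and schema, not exact specifications:

```python
import configparser

# A shared CLI config, as found in ~/.aws/config: written once,
# respected by the CLI, Terraform, the SDKs, and any agent with shell access.
SHARED_CLI_CONFIG = """
[profile dev]
region = eu-west-1
output = json
"""

cli = configparser.ConfigParser()
cli.read_string(SHARED_CLI_CONFIG)
print(cli["profile dev"]["region"])  # one definition, visible everywhere

# The MCP status quo: the same server entry duplicated per client.
# (Paths and key names are illustrative; each client has its own format.)
server_entry = {"command": "some-mcp-server", "env": {"AWS_PROFILE": "dev"}}
mcp_configs = {
    ".vscode/mcp.json": {"servers": {"aws": server_entry}},
    ".codeium/windsurf/mcp_config.json": {"mcpServers": {"aws": server_entry}},
    ".cursor/mcp.json": {"mcpServers": {"aws": server_entry}},
}
print(f"{len(mcp_configs)} copies of the same configuration to keep in sync")
```

One edit to the shared CLI config propagates everywhere; one edit to an MCP server entry has to be repeated per client.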
3. The Wrapper Trap and Incomplete API Coverage
Most MCP servers today are essentially wrappers around existing REST APIs. The problem is that they are rarely complete wrappers.
Building an MCP server that covers the entire surface area of a cloud provider is a massive undertaking. As a result, most maintainers expose only a small subset of the underlying endpoints, usually just the ones they needed personally.
This leads to a frustrating developer experience: you ask your AI agent to perform a task, the agent checks its tools, and the specific function is missing. You are then forced to context switch back to the CLI or Console to finish the job. If your autonomous workflow requires manual intervention 30% of the time because of missing endpoints, it is not autonomous.
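A hypothetical example makes the fallback arithmetic concrete. The operation names below are invented for illustration; the point is what partial coverage does to a workflow:

```python
# Invented operation names, standing in for a platform's API surface.
platform_operations = {
    "create_bucket", "delete_bucket", "put_object", "get_object",
    "list_objects", "set_bucket_policy", "enable_versioning",
}
# A typical community MCP wrapper: the read-heavy subset the author needed.
mcp_exposed = {"get_object", "list_objects", "put_object"}

# A realistic provisioning workflow an agent might be asked to run.
workflow = ["create_bucket", "put_object", "set_bucket_policy"]
missing = [op for op in workflow if op not in mcp_exposed]
fallback_rate = len(missing) / len(workflow)
print(f"Manual fallback needed for {missing} ({fallback_rate:.0%} of steps)")
```

Even a wrapper covering three of seven operations leaves this three-step workflow needing human intervention on two steps.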
4. The Maintenance Burden
MCP servers need to be updated regularly. That is no different from CLIs or Terraform providers, but the scale of the problem is.
Because the ecosystem is fragmented, you are not just updating one binary. You might be managing updates for a dozen different micro-servers, all evolving at different speeds. If the underlying REST API releases a new feature, you are stuck waiting for the MCP server maintainer to pull that update in.
5. Read-Only Limitations and Local Constraints
A surprising number of MCP servers act primarily as read-only interfaces. They are great for chatting with your data but terrible for doing actual work.
Many current implementations only support local mode and work with a single set of user credentials. In complex DevOps environments, where we juggle multiple roles and cross-account access, this single-profile limitation is a dealbreaker.
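Compare that with how a CLI handles the same situation. `AWS_PROFILE` is the AWS CLI's real profile selector; the profile names below are hypothetical. A sketch of an agent switching accounts per invocation:

```python
import os

def cli_env(profile: str) -> dict:
    """Build the environment an agent would hand to subprocess.run
    when shelling out to the AWS CLI with a specific profile selected."""
    env = dict(os.environ)
    env["AWS_PROFILE"] = profile  # honored by the CLI, the SDKs, and Terraform
    return env

# One agent session, three accounts: no server restarts and no
# per-credential MCP instances required.
for profile in ("dev", "staging", "prod-readonly"):
    print(profile, "->", cli_env(profile)["AWS_PROFILE"])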
6. Inefficient Token Usage
This is a technical nuance that often gets overlooked. MCP clients typically send the prompt along with all configured tool specifications to the LLM.
If you have a robust MCP server with 50 tools, the JSON schema for those 50 tools consumes a significant chunk of your context window, and your wallet, on every single turn of the conversation, even if the agent only needs one simple tool.
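A back-of-the-envelope calculation shows the scale. The tool schema below is a made-up but representative shape, and the four-characters-per-token ratio is a rough rule of thumb, not a real tokenizer count:

```python
import json

def fake_tool_schema(i: int) -> dict:
    """A representative (invented) MCP-style tool specification."""
    return {
        "name": f"tool_{i}",
        "description": "Performs one narrowly scoped platform operation.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "resource_id": {"type": "string", "description": "Target resource"},
                "region": {"type": "string", "description": "Deployment region"},
            },
            "required": ["resource_id"],
        },
    }

schemas = [fake_tool_schema(i) for i in range(50)]
chars = len(json.dumps(schemas))
tokens_per_turn = chars // 4  # rule-of-thumb chars-to-tokens conversion
turns = 20
print(f"~{tokens_per_turn} tokens of schema overhead per turn, "
      f"~{tokens_per_turn * turns} over a {turns}-turn session")
```

Even with these modest schemas, thousands of tokens ride along on every turn before the agent has said a word.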
The Case for the Humble CLI
While the industry chases the new shiny object the humble CLI has quietly perfected the art of programmatic interaction over the last 30 years.
1. The Ultimate Vibe Coding Tool
The beauty of a CLI is its portability. You configure it once on your machine, handling your keys and profiles, and it is instantly available to any tool that has shell access.
Whether you are using a strictly CLI-based agent or an IDE-integrated assistant the CLI is the universal language. It does not care if you are using VS Code or Vim because if the shell can see it the agent can use it.
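That universality shows in the shape of the call itself. In this sketch the Python interpreter stands in for a real binary so the example runs anywhere; an agent would resolve `aws` or `kubectl` from the `PATH` in exactly the same way:

```python
import subprocess
import sys

# If the shell can find a binary, an agent can drive it. The Python
# interpreter serves as a stand-in CLI so this sketch runs without
# cloud credentials; a real agent would resolve e.g. "aws" from PATH.
binary = sys.executable
result = subprocess.run(
    [binary, "-c", "print('caller identity: demo')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # identical from any editor, IDE, or terminal agent
```

No client-specific registration, no transport negotiation: stdin, stdout, and an exit code are the whole protocol.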
2. Unified Installation and Full Coverage
When you install the Azure CLI or the Google Cloud SDK you are installing a single binary that provides nearly 100% coverage of that platform’s REST APIs.
You do not need to hunt for an S3 MCP Server and an EC2 MCP Server separately. You install one tool and you have the power of the entire cloud platform at your agent's fingertips. This monolithic approach reduces cognitive load for the human and reduces tool-hunting errors for the AI.
3. Solved Problems Including Auth and Transport
CLIs have spent decades solving the hard problems. Authentication, including MFA and SSO, is handled natively. Transport is a non-issue: there are no WebSocket connections or JSON-RPC errors to debug between an MCP host and client. Upgrading a single CLI is infinitely simpler than managing a fleet of disparate MCP servers.
4. No Fallback Friction
Because official CLIs are usually maintained by the platform vendors themselves they are first-class citizens. You rarely encounter a situation where the CLI cannot do something the API allows.
This reliability is crucial for agentic workflows. When an agent uses a CLI you avoid the scenario where it tries and fails due to an unsupported method.
Conclusion
We are in the early days of AI protocol standardization, and MCP is an exciting development that may eventually mature into the standard we need. However, we build systems for today, not for a hypothetical future.
If an agentic tool has access to a CLI, using it instead of one or more MCP servers currently leads to faster execution, significantly lower maintenance, and higher reliability.
Sometimes the best tool for the future is the one we have been using for decades.
