Amazon Bedrock AgentCore Policy – Secure your Agents' MCP Servers/Tools using Natural Language

Amazon Bedrock AgentCore is AWS's developer-oriented platform for building, deploying, scaling, and managing your AI agents at production scale.

Amazon Bedrock AgentCore

At this year's re:Invent in Las Vegas, AWS announced a number of updates to Bedrock AgentCore, including AgentCore Policy – which secures what your agent can do or call – and AgentCore Evaluation, which assesses how well your agent system and its tools perform specific tasks across different inputs and contexts. In this article, we will talk about AgentCore Policy – adding a top layer of security to your AI system, on top of Bedrock Guardrails (Build responsible AI applications with Amazon Bedrock Guardrails | Artificial Intelligence).

Think of Bedrock Guardrails as security for the input and output of your agent. But an agent doesn't only "talk" – it can also "do" things, so you need to put up barriers or rules that limit what your agent "can do".

“The way I think about it … is it controls what the agent is allowed to ask the tool to do. At the low level, you’ve got [identity access management], which says these are the tools that can be used. With Policy, you’ve got what you can ask the tool to do — and then with our existing Bedrock Guardrails, you can control what the LLM will say back to the end user,” Richardson explained.

AgentCore Policy at re:Invent

Your AI agent can call tools, execute code, automate workflows, and more to solve business problems with flexibility. But this creates a security challenge: agents may misinterpret business rules or act outside their given permissions.

For instance: you connect your customer's AI agent to their Google Drive account with full permission to read, write, and delete files, but you don't want the agent to "accidentally" delete some of your customer's files – so you need some way to limit the capabilities of your agent.
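To make the Google Drive scenario concrete, a deny rule could look like the following Cedar sketch (the tool name DriveTool__delete_file and the gateway placeholder are hypothetical examples I made up for illustration, not names from an actual integration):

forbid(
    principal,
    action == AgentCore::Action::"DriveTool__delete_file",
    resource == AgentCore::Gateway::"<GATEWAY_ARN>"
);

In Cedar, forbid rules always override permit rules, so even if another policy grants broad access to the Drive tools, delete calls would stay blocked.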

Trust, but verify

With Amazon Bedrock AgentCore Policy, developers can create policy engines, create and store deterministic policies in them and associate policy engines with gateways. AgentCore Policy intercepts all agent traffic through Amazon Bedrock AgentCore Gateways and evaluates each request against defined policies in the policy engine before allowing tool access.

Bedrock AgentCore Policy Console

Policies are written in the Cedar language, an open source language for writing and enforcing authorization policies. This allows developers to precisely specify what agents can access and what actions they can perform.

Amazon Bedrock AgentCore Policy also lets you author policies in natural language: developers describe rules in plain English instead of writing formal policy code in Cedar, and the service catches problems such as overly permissive or unsatisfiable policies before they are enforced.

To create a policy, you can start with a natural language description (which should include information about the authentication claims to use) or directly edit Cedar code.

AgentCore Policy Add

Natural language-based policy authoring provides a more accessible way for you to create fine-grained policies. Instead of writing formal policy code, you can describe rules in plain English. The system interprets your intent, generates candidate policies, validates them against the tool schema, and uses automated reasoning to check safety conditions – identifying policies that are overly permissive, overly restrictive, or contain conditions that can never be satisfied.

Unlike generic large language model (LLM) translations, this feature understands the structure of your tools and generates policies that are both syntactically correct and semantically aligned with your intent, while flagging rules that cannot be enforced. It is also available as a Model Context Protocol (MCP) server, so you can author and validate policies directly in your preferred AI-assisted coding environment as part of your normal development workflow. This approach reduces onboarding time and helps you write high-quality authorization rules without needing Cedar expertise.
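As an illustration of this workflow, a plain-English description such as "Only authenticated users whose department claim is support can call the ticket lookup tool" might yield a candidate policy along these lines (the tool name TicketTool__lookup_ticket and the department claim are hypothetical; a real candidate would be validated against your actual tool schema):

permit(
    principal is AgentCore::OAuthUser,
    action == AgentCore::Action::"TicketTool__lookup_ticket",
    resource == AgentCore::Gateway::"<GATEWAY_ARN>"
)
when {
    principal.hasTag("department") &&
    principal.getTag("department") == "support"
};

You can then review and edit the generated Cedar before enforcing it.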

permit(
    principal is AgentCore::OAuthUser,
    action == AgentCore::Action::"RefundTool__process_refund",
    resource == AgentCore::Gateway::"<GATEWAY_ARN>"
)
when {
    principal.hasTag("role") &&
    principal.getTag("role") == "refund-agent" &&
    context.input.amount < 200
};

The preceding sample policy uses information from the OAuth claims in the JWT token used to authenticate to an AgentCore gateway (for the role) and the arguments passed to the tool call (context.input) to validate access to the refund-processing tool. Only an authenticated user with the refund-agent role can access the tool, and only for amounts (context.input.amount) lower than $200 USD.
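To see where context.input comes from, recall that the gateway intercepts the MCP tool call itself, and the policy is evaluated against the call's arguments. A request shaped roughly like the following (an illustrative sketch, not the exact wire format) would be permitted for a refund-agent, because amount is below 200:

{
    "name": "RefundTool__process_refund",
    "arguments": {
        "amount": 150
    }
}

The same call with "amount": 500 would be denied before it ever reaches the tool.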

For more examples, please read the AWS Bedrock AgentCore documentation here: https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/example-policies.html

This is the end of my article about AgentCore Policy. I will make a simple demo video about this service on my YouTube channel later. Thank you for reading!

References:
Amazon Bedrock AgentCore adds quality evaluations and policy controls for deploying trusted AI agents | AWS News Blog

Amazon Bedrock AgentCore Policy: Evaluate your agent – Amazon Bedrock AgentCore

Build responsible AI applications with Amazon Bedrock Guardrails | Artificial Intelligence

AWS’ New Policy Layer in Bedrock AgentCore Makes Sure AI Agents Can’t Give Away the Store – The New Stack
