In our previous article, we explored the architecture of a secure AI workflow using the Model Context Protocol (MCP) and SecureAuth’s Microperimeter™. Now, it’s time to build one.
This guide shows you how to implement a simple AI agent that enforces identity-based access before touching any data: no hardcoded role assumptions, no static checks, just policy-driven authorization while interacting with an external API.
Our example: A weather assistant that only returns forecasts to users who have permission to see them.
What You’ll Need to Build a Secure AI Agent
- A SecureAuth trial tenant (sign up [here])
- Ollama (or any LLM you can invoke via API) to run a local model like llama3.2
- Python or Node.js for building the agent and MCP server
- A basic understanding of OAuth2 (scopes, access tokens)
This setup lets you build and test locally with no surprises.
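If you’re running the model locally with Ollama, pulling and serving llama3.2 takes two commands:

ollama pull llama3.2
ollama serve   # exposes the local API at http://localhost:11434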
Step 1: Configure Your SecureAuth Tenant
1. Create an OAuth Application
- Go to your SecureAuth dashboard
- Create a new app named “AI Agent”
- Note the client ID, client secret, and OIDC discovery URL
- Set redirect URI to: http://localhost:8000/callback
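To sanity-check the configuration, fetch the tenant’s OIDC discovery document. The path below is the OIDC standard; the host is a placeholder for your own tenant:

curl https://<your-tenant>/.well-known/openid-configuration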
2. Create the Gateway (Microperimeter)
- Create a new Gateway called mcp-weather-service using the Standalone-Authorizer model
- Start the authorizer following the quickstart guide
- Prepare a JSON file (apis.json) to define the API endpoints:
{
  "api_groups": [
    {
      "name": "mcp-weather-service",
      "id": "mcp-weather-service",
      "apis": [
        { "path": "/get_forecast", "method": "GET" },
        { "path": "/get_alerts", "method": "GET" }
      ]
    }
  ]
}
- Upload it via:
curl -sSLk -X PUT https://localhost:9004/apis \
  --header "content-type:application/json" \
  --data @apis.json
3. Define Scopes and Audience Claims
- Create scopes like forecast and alerts tied to the mcp-weather-service API group
- Configure audience claims for each scope
4. Assign Access Policies
- Example: only users in the forecast_users group get the forecast scope
- Policies are enforced at both token issuance and API access
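For intuition, here’s roughly what a decoded access token might contain for a user in the forecast_users group. Exact claim names depend on your tenant configuration; the scope claim shown here follows the JWT access token convention (RFC 9068):

{
  "sub": "user-a@example.com",
  "aud": "mcp-weather-service",
  "scope": "forecast",
  "exp": 1735689600
}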
Step 2: Build the MCP Server
Start with a basic MCP server (e.g., fork Anthropic’s quickstart weather server).
Modify the service to:
- Accept an access token as a parameter
- Validate the token by calling the Microperimeter’s /request/validate endpoint
- If validation is denied, return a blank or generic result
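Here’s a minimal sketch in Python using the FastMCP class from the official MCP Python SDK (the same one Anthropic’s quickstart uses). The request body sent to /request/validate is an assumption, so check the Microperimeter docs for the exact contract, and fetch_forecast is a placeholder for a real weather lookup:

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-weather-service")
AUTHORIZER_URL = "https://localhost:9004"  # the standalone authorizer from Step 1

async def is_authorized(token: str, path: str) -> bool:
    # Ask the Microperimeter whether this token may call the given endpoint.
    # NOTE: the payload shape below is an assumption; consult the SecureAuth docs.
    async with httpx.AsyncClient(verify=False) as client:  # self-signed cert in local dev
        resp = await client.post(
            f"{AUTHORIZER_URL}/request/validate",
            json={"api_group": "mcp-weather-service", "path": path,
                  "method": "GET", "token": token},
        )
        return resp.status_code == 200

def fetch_forecast(location: str) -> str:
    # Placeholder: call a real weather API here.
    return f"Sunny with a high of 72°F in {location}."

@mcp.tool()
async def get_forecast(location: str, access_token: str) -> str:
    """Return a forecast only if the caller's token is authorized."""
    if not await is_authorized(access_token, "/get_forecast"):
        return "Forecast unavailable."  # generic result on denial
    return fetch_forecast(location)

if __name__ == "__main__":
    mcp.run(transport="stdio")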
Step 3: Build the AI Agent
Your agent should do the following (a code sketch follows the list):
- Prompt user to log in via OIDC
- Request OAuth scopes: forecast, alerts
- Receive access token upon login
- Call the MCP server, passing the token
- Receive weather data (if allowed)
- Combine user prompt + weather data → send to LLM
- Return final AI-generated response based on real data and identity-aware access
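Here’s a minimal command-line sketch of that flow in Python, using Authlib for the OAuth2 authorization-code flow and Ollama’s local API for generation. The SecureAuth endpoint URLs and client credentials are placeholders for the values from Step 1, and call_mcp_tool stands in for however you invoke the MCP server built in Step 2:

import httpx
from authlib.integrations.requests_client import OAuth2Session

# Placeholders: copy these values from your SecureAuth tenant (Step 1).
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"
AUTH_ENDPOINT = "https://<your-tenant>/oauth2/authorize"  # from the OIDC discovery document
TOKEN_ENDPOINT = "https://<your-tenant>/oauth2/token"

def login() -> str:
    # Run the OAuth2 authorization-code flow and return the access token.
    session = OAuth2Session(CLIENT_ID, CLIENT_SECRET,
                            scope="forecast alerts",
                            redirect_uri="http://localhost:8000/callback")
    url, _ = session.create_authorization_url(AUTH_ENDPOINT)
    print(f"Open this URL and log in:\n{url}")
    redirect = input("Paste the full redirect URL here: ")
    token = session.fetch_token(TOKEN_ENDPOINT, authorization_response=redirect)
    return token["access_token"]

def call_mcp_tool(name: str, **kwargs) -> str:
    # Stand-in for an MCP client call to the server built in Step 2.
    raise NotImplementedError("wire this to your MCP client session")

def ask_llm(question: str, weather: str) -> str:
    # Ground the model's answer in the retrieved weather data via Ollama's API.
    resp = httpx.post("http://localhost:11434/api/generate", json={
        "model": "llama3.2",
        "prompt": f"Weather data: {weather}\n\nUser question: {question}",
        "stream": False,
    }, timeout=120)
    return resp.json()["response"]

if __name__ == "__main__":
    access_token = login()
    weather = call_mcp_tool("get_forecast", location="Boston",
                            access_token=access_token)
    print(ask_llm("Do I need an umbrella today?", weather))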
You can spin up a simple front-end interface using a framework like Flask, FastAPI, or Express.
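For example, a bare-bones FastAPI app can receive the redirect at the URI registered in Step 1 (a sketch only; it simply surfaces the authorization code for the token exchange):

from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/callback")
async def callback(request: Request):
    # SecureAuth redirects here with ?code=...; exchange it for tokens next.
    code = request.query_params.get("code")
    return {"received_authorization_code": bool(code)}

# Run with: uvicorn app:app --port 8000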
⚠️ Security Note: This example uses simplified flows for educational purposes. In production, always use secure token storage, PKCE, and HTTPS.
Test Case: Role-Based Access in Action
Let’s look at what happens when two different users log in.
User A (has access):
- Token includes forecast scope
- Agent retrieves weather data
- LLM provides detailed, context-aware answer
User B (no access):
- Token lacks required scope
- API call is blocked
- LLM returns a generic response
All interactions are logged and auditable via SecureAuth’s central dashboard.
Conclusion: From Prototype to Production
With this prototype, you’ve now built:
- A real AI agent secured by modern OAuth flows and identity policy
- Role-based access to critical APIs
- A working integration with SecureAuth’s Microperimeter Authorizer
- Full visibility into every interaction
The takeaway? AI agents can be secure by default—when identity leads and policy enforces it.
Next up: Why authorization (not authentication) is the real control plane for trust in agentic AI systems, and how SecureAuth empowers you to scale securely.