How to Secure Your MCP Server: OAuth 2.1 Best Practices for 2025

TL;DR
MCP servers must implement OAuth 2.1 with PKCE (S256 method mandatory)
Use RFC 8707 resource indicators to prevent cross-server token misuse
Implement role-based access control (RBAC) at the tool level, not just authentication
Never store tokens in plain text; use short-lived access tokens with refresh rotation
Consider delegating authentication to established identity providers (Auth0, Keycloak)
Introduction: Why MCP Security Matters
The Model Context Protocol (MCP) enables AI agents to access external tools and data sources, from your Google Search Console to internal databases. This power comes with significant security implications: a compromised MCP server could expose sensitive data, execute unauthorized actions, or become a vector for prompt injection attacks.
The March 2025 MCP specification update introduced comprehensive OAuth 2.1 requirements. As Auth0 notes, "Mandating PKCE for all clients significantly raises the bar for security, protecting against common attacks right out of the box."
This guide covers the security requirements, attack vectors, and implementation patterns you need to build production-ready MCP servers.
The Five Critical Vulnerabilities
Before diving into solutions, understand what you're defending against. According to InfraCloud's security analysis (July 2025), MCP servers face five primary vulnerabilities:
| Vulnerability | Risk | Impact |
|---|---|---|
| Broad Access Tokens | Long-lived static credentials grant unrestricted access | Token theft compromises all tools |
| Missing Tenant Isolation | Multi-user setups lack proper data separation | Cross-tenant data leaks |
| Inadequate Rate Limiting | AI agents can overwhelm servers rapidly | DoS, resource exhaustion |
| Unverified Tool Updates | Tool behavior changes without notification | Silent privilege escalation |
| Lack of Auditing | No visibility into access patterns | Delayed breach detection |
⚠️ Watch Out: The binary approach to MCP security, where tokens either grant full access or none, fails both authentication and access control requirements. Tokens don't identify specific users, and they typically grant unrestricted access to all available tools.
OAuth 2.1: The Foundation
The MCP specification mandates OAuth 2.1 compliance for HTTP-based transports. This isn't optional; it's the baseline for any production deployment.
Required Standards
Your MCP server must conform to these standards:
| Standard | RFC | Purpose |
|---|---|---|
| OAuth 2.1 | draft-ietf-oauth-v2-1-13 | Core authorization framework |
| Authorization Server Metadata | RFC 8414 | Automatic endpoint discovery |
| Dynamic Client Registration | RFC 7591 | Programmatic client onboarding |
| Protected Resource Metadata | RFC 9728 | Resource server discovery |
| Resource Indicators | RFC 8707 | Token audience binding |
The Discovery Flow
When an AI client connects to your MCP server, this flow occurs automatically:
```text
1. Client requests MCP endpoint
   → Server returns 401 with WWW-Authenticate header
2. Client fetches Protected Resource Metadata (RFC 9728)
   → Discovers Authorization Server location
3. Client fetches AS Metadata (RFC 8414)
   → Gets authorization/token endpoints, registers via DCR
4. Standard OAuth flow completes
   → Client receives scoped access token
```
As AWS explains (June 2025), this automated discovery "enables true plug-and-play connectivity between clients and servers" without manual credential configuration.
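To make steps 1 and 2 concrete, here is a minimal server-side sketch using Express. The metadata fields follow RFC 9728, while the host names, scopes, and port are placeholders, not values from any specific deployment:

```typescript
import express from 'express'

const app = express()

// Step 2: Protected Resource Metadata (RFC 9728) at the well-known path.
// Host names and scopes below are illustrative.
app.get('/.well-known/oauth-protected-resource', (_req, res) => {
  res.json({
    resource: 'https://mcp.example.com/mcp',
    authorization_servers: ['https://auth.example.com'],
    scopes_supported: ['gsc:read', 'gsc:write'],
    bearer_methods_supported: ['header'],
  })
})

// Step 1: unauthenticated requests get a 401 that points at the metadata document.
app.use('/mcp', (req, res, next) => {
  if (!req.headers.authorization) {
    res
      .set(
        'WWW-Authenticate',
        'Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"'
      )
      .status(401)
      .end()
    return
  }
  next()
})

app.listen(3000)
```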
PKCE: Mandatory, Not Optional
Proof Key for Code Exchange (PKCE) is required for all MCP clients. The specification is explicit:
💡 Pro Tip: MCP clients MUST use the `S256` code challenge method when technically capable. The `plain` method is only permitted when S256 is impossible.
PKCE Implementation Checklist
[ ] Generate cryptographically random code_verifier (43-128 characters)
[ ] Compute code_challenge using SHA-256 (S256 method)
[ ] Include code_challenge in authorization request
[ ] Include code_verifier in token request
[ ] Verify PKCE support via server metadata before proceeding
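A minimal sketch of the first few checklist items using Node's built-in `crypto` module; the authorization endpoint URL is a placeholder:

```typescript
import { randomBytes, createHash } from 'crypto'

// 32 random bytes -> 43-character base64url string (within the 43-128 range)
function generateCodeVerifier(): string {
  return randomBytes(32).toString('base64url')
}

// S256: code_challenge = BASE64URL(SHA-256(code_verifier))
function generateCodeChallenge(verifier: string): string {
  return createHash('sha256').update(verifier).digest('base64url')
}

const codeVerifier = generateCodeVerifier()
const codeChallenge = generateCodeChallenge(codeVerifier)

// The challenge goes in the authorization request; the verifier is kept locally
// and sent only in the subsequent token request.
const authUrl = new URL('https://auth.example.com/authorize')
authUrl.searchParams.set('response_type', 'code')
authUrl.searchParams.set('code_challenge', codeChallenge)
authUrl.searchParams.set('code_challenge_method', 'S256')
```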
Why PKCE Matters
Without PKCE, authorization codes can be intercepted and exchanged by attackers. This is especially critical for MCP because:
Public clients: Many AI tools (Claude Desktop, Cursor) are public clients without client secrets
Redirect interception: Authorization code interception is a known attack vector
Mobile/desktop apps: These environments can't securely store client secrets
RFC 8707: Preventing Cross-Server Token Theft
One of the most important security features in MCP is the resource parameter from RFC 8707. This binds tokens to specific servers, preventing a critical attack vector.
The Attack Scenario
Without resource indicators:
1. User connects to malicious MCP server `evil.com/mcp`
2. Evil server receives a token that might work on `legitimate.com/mcp`
3. Attacker uses the stolen token against the legitimate server
The Solution
With RFC 8707, tokens are bound to their intended audience:
Authorization Request:

```http
GET /authorize?
  response_type=code&
  client_id=abc123&
  resource=https://mcp.example.com/mcp    ← Target server
```

Token Request:

```http
POST /token
  grant_type=authorization_code&
  code=xyz789&
  resource=https://mcp.example.com/mcp    ← Must match
```
🔑 Key Finding: According to the MCP specification, "Credentials issued for one server can't be misused by another" when resource indicators are properly implemented.
Implementation Pattern
Here's how to validate the resource parameter server-side:
```typescript
// In your token endpoint: reject requests whose resource parameter
// does not match this server, then bind the token to that audience.
function validateTokenRequest(req: Request) {
  const resource = req.body.resource
  const expectedResource = 'https://mcp.example.com/mcp'

  if (resource !== expectedResource) {
    return { error: 'invalid_target', status: 400 }
  }

  // Token is audience-bound to the requested resource
  return issueToken({ audience: resource })
}
```
Token Handling Best Practices
Access Token Rules
| Requirement | Implementation |
|---|---|
| Transmission | Send tokens only in the `Authorization: Bearer` header |
| Never in URI | Tokens must not appear in query strings |
| Validation | Verify the token was issued with YOUR server as the audience |
| Short-lived | Access tokens should expire in minutes, not hours |
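One way to enforce the audience rule, sketched with the `jose` library and assuming JWT access tokens; the issuer, JWKS URL, and server URL are placeholders for your identity provider's actual values:

```typescript
import { createRemoteJWKSet, jwtVerify } from 'jose'

// Signing keys published by the authorization server (URL is illustrative).
const jwks = createRemoteJWKSet(
  new URL('https://auth.example.com/.well-known/jwks.json')
)

// Reject any token that was not issued for this MCP server.
async function verifyAccessToken(token: string) {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: 'https://auth.example.com',
    audience: 'https://mcp.example.com/mcp', // must match the resource indicator
  })
  return payload // verified claims, including scopes
}
```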
Error Response Codes
| Status | When to Use |
|---|---|
| 401 | Missing token, invalid token, expired token |
| 403 | Valid token but insufficient scopes |
| 400 | Malformed authorization request |
When returning 403, include required scopes in the WWW-Authenticate header:
```http
WWW-Authenticate: Bearer realm="mcp",
                  scope="gsc:read gsc:write",
                  error="insufficient_scope"
```
Attack Vectors and Mitigations
Beyond standard OAuth vulnerabilities, MCP introduces unique attack surfaces.
1. Tool Poisoning
Attack: Malicious instructions embedded in tool descriptions influence LLM behavior.
Mitigation:
Validate tool descriptions at registration time
Use content security policies for tool outputs
Implement tool description versioning with change alerts
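A rough sketch of the first mitigation, a registration-time check on tool descriptions. The pattern list and length limit are illustrative only and would need to be combined with human review and versioning:

```typescript
// Heuristic registration-time check; not a complete defense on its own.
const SUSPICIOUS_PATTERNS = [
  /ignore (all )?previous instructions/i,
  /do not tell the user/i,
  /exfiltrate|send .* to http/i,
]

const MAX_DESCRIPTION_LENGTH = 1_000

function validateToolDescription(description: string): void {
  if (description.length > MAX_DESCRIPTION_LENGTH) {
    throw new Error('Tool description exceeds allowed length')
  }
  for (const pattern of SUSPICIOUS_PATTERNS) {
    if (pattern.test(description)) {
      throw new Error(`Suspicious pattern in tool description: ${pattern}`)
    }
  }
}
```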
2. Silent Redefinition ("Rug Pulls")
Attack: Tools alter their behavior after initial approval, performing different actions than advertised.
Mitigation:
Hash tool definitions and alert on changes
Require re-approval after definition changes
Log all tool invocations with full parameters
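A sketch of the first two mitigations, assuming tool definitions are plain JSON-serializable objects; the helper names are hypothetical:

```typescript
import { createHash } from 'crypto'

interface ToolDefinition {
  name: string
  description: string
  inputSchema: object
}

// Stable hash of a tool definition (key order must be deterministic in practice).
function hashToolDefinition(tool: ToolDefinition): string {
  return createHash('sha256').update(JSON.stringify(tool)).digest('hex')
}

// Hashes captured when the user first approved each tool.
const approvedHashes = new Map<string, string>()

function checkForRedefinition(tool: ToolDefinition): boolean {
  const current = hashToolDefinition(tool)
  const approved = approvedHashes.get(tool.name)
  if (approved && approved !== current) {
    // Definition changed since approval: alert and require re-approval.
    console.warn(`Tool "${tool.name}" definition changed; re-approval required`)
    return false
  }
  approvedHashes.set(tool.name, current)
  return true
}
```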
3. Cross-Server Shadowing
Attack: Malicious servers register tools with names matching trusted servers, intercepting calls.
Mitigation:
Namespace tools with server identifier
Maintain allowlists of trusted servers
Use TLS certificate pinning for critical connections
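The namespacing and allowlist mitigations can be sketched as follows; the server URLs are placeholders:

```typescript
// Only servers on this list may contribute tools (placeholder URLs).
const TRUSTED_SERVERS = new Set([
  'https://mcp.example.com/mcp',
  'https://gsc.example.com/mcp',
])

// Prefix every tool with its origin so identically named tools cannot shadow
// each other, e.g. "gsc.example.com/get_search_analytics".
function registerTool(serverUrl: string, toolName: string): string {
  if (!TRUSTED_SERVERS.has(serverUrl)) {
    throw new Error(`Server not on allowlist: ${serverUrl}`)
  }
  const host = new URL(serverUrl).host
  return `${host}/${toolName}`
}
```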
4. Prompt Injection via Tools
Attack: Tool outputs contain instructions that manipulate the LLM's behavior.
Mitigation:
Sanitize all tool outputs before returning to LLM
Use structured data formats instead of free text
Implement output length limits
⚠️ Watch Out: As Auth0 warns, "User input is not sanitized and that could lead into injection attacks." Never trust tool inputs or outputs without validation.
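A minimal illustration of output sanitization and length limits; the stripping rules and limit below are deliberately simple and not a complete defense:

```typescript
const MAX_OUTPUT_LENGTH = 10_000 // characters; tune per deployment

// Strip markup that could smuggle instructions and cap the size of tool output
// before it is returned to the LLM.
function sanitizeToolOutput(raw: string): string {
  const withoutTags = raw.replace(/<[^>]*>/g, '') // drop HTML/XML tags
  const collapsed = withoutTags.replace(/\s+/g, ' ').trim()
  return collapsed.slice(0, MAX_OUTPUT_LENGTH)
}
```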
Role-Based Access Control (RBAC)
Authentication alone isn't enough. You need fine-grained access control at the tool level.
The RBAC Model
```text
Roles   → Permissions                      → Tools

admin   → [manage:*, read:*, write:*]      → All tools
analyst → [read:analytics, read:pages]     → get_search_analytics, get_top_pages
viewer  → [read:basic]                     → list_sites only
```
Implementation Pattern
```typescript
// Define permissions per tool (deny by default for unknown tools)
const toolPermissions: Record<string, string[]> = {
  'get_search_analytics': ['read:analytics'],
  'submit_sitemap': ['write:sitemaps'],
  'delete_site': ['manage:sites', 'admin']
}

// Check before tool execution
async function executeTool(tool: string, user: User) {
  const required = toolPermissions[tool]
  if (!required) {
    throw new ForbiddenError(`Unknown tool: ${tool}`)
  }

  const userPerms = await getUserPermissions(user)
  if (!required.some(p => userPerms.includes(p))) {
    throw new ForbiddenError(`Missing permission for ${tool}`)
  }

  return runTool(tool)
}
```
Claude Code's Permission Model
Anthropic's Claude Code implements a tiered permission system worth emulating:
| Tier | Approval | Example |
|---|---|---|
| Read-only | None required | Glob, Grep, Read files |
| Bash commands | Per-command | |
| File modifications | Session-wide | Edit, Write tools |
| Dangerous operations | Always ask | |
This progressive trust model starts restrictive and expands based on context.
Delegating to Identity Providers
The MCP specification supports delegating authentication to trusted identity providers. This is often the better choice for production deployments.
Why Delegate?
Building your own authorization server means taking on:
Token storage infrastructure
Security audit obligations
Third-party token validation
Credential lifecycle management
Delegating to Auth0, Keycloak, or Okta offloads these responsibilities.
Keycloak Integration
InfraCloud demonstrates using Keycloak for MCP authentication:
| Feature | Benefit |
|---|---|
| Self-hostable | Full control over identity data |
| OIDC support | Standards-compliant integration |
| Custom scopes | Map to MCP permissions |
| Built-in RBAC | No custom implementation needed |
| Session management | Handle token refresh automatically |
Architecture Pattern
```text
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│  AI Client  │─────▶│  MCP Server │─────▶│  Keycloak   │
│  (Claude)   │      │ (Resource)  │      │   (Auth)    │
└─────────────┘      └─────────────┘      └─────────────┘
                            │
                            ▼
                     ┌─────────────┐
                     │   Backend   │
                     │  Services   │
                     └─────────────┘
```
Your MCP server becomes a pure resource server, validating tokens issued by the identity provider.
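Token validation against Keycloak can be sketched with `jose`, assuming the default Keycloak token layout; the realm name, host, and role-to-permission mapping are placeholders that mirror the RBAC model above:

```typescript
import { createRemoteJWKSet, jwtVerify } from 'jose'

// Keycloak's published signing keys (realm name and host are placeholders).
const KEYCLOAK_ISSUER = 'https://keycloak.example.com/realms/mcp'
const jwks = createRemoteJWKSet(
  new URL(`${KEYCLOAK_ISSUER}/protocol/openid-connect/certs`)
)

// Validate the token, then map Keycloak realm roles onto MCP permissions.
async function authorize(token: string): Promise<string[]> {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: KEYCLOAK_ISSUER,
    audience: 'https://mcp.example.com/mcp',
  })

  // Keycloak places realm roles under realm_access.roles in the access token.
  const roles = (payload.realm_access as { roles?: string[] })?.roles ?? []

  const rolePermissions: Record<string, string[]> = {
    admin: ['manage:*', 'read:*', 'write:*'],
    analyst: ['read:analytics', 'read:pages'],
    viewer: ['read:basic'],
  }
  return roles.flatMap((r) => rolePermissions[r] ?? [])
}
```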
Enterprise Considerations
For enterprise deployments, additional patterns apply.
Centralized Authorization Server
Rather than each MCP server running its own auth, use a centralized AS:
Single sign-on across all MCP services
Unified access policies
Centralized audit logging
Credential lifecycle management
As AWS notes, this lets organizations "integrate existing SSO infrastructure" while maintaining policy consistency.
Future: Autonomous Agent Authentication
The current OAuth model assumes human-in-the-loop consent. Future MCP specifications will address:
JWT-based assertions (RFC 7523): Replacing client secrets for workload-to-workload interactions
SPIFFE workload identity: Standard workload identification for service mesh environments
Multi-hop delegation: Preserving "on-behalf-of" relationships across agent chains
Security Checklist
Before deploying your MCP server to production:
Authentication
[ ] OAuth 2.1 compliant implementation
[ ] PKCE required for all clients (S256 method)
[ ] RFC 8707 resource parameter validation
[ ] Short-lived access tokens (< 1 hour)
[ ] Refresh token rotation enabled
Authorization
[ ] Tool-level permission checks
[ ] RBAC or ABAC implementation
[ ] Scope validation on every request
[ ] Deny by default policy
Transport Security
[ ] HTTPS only (no HTTP endpoints)
[ ] Redirect URIs restricted to localhost or HTTPS
[ ] TLS 1.2+ required
[ ] Certificate validation enabled
Operational Security
[ ] All tool invocations logged
[ ] Audit trail for permission changes
[ ] Rate limiting implemented
[ ] Tool definition change alerts
[ ] Regular security reviews
Conclusion
Securing MCP servers requires moving beyond simple API key authentication. The OAuth 2.1 foundation, with mandatory PKCE and resource indicators, provides strong baseline security. But production deployments need additional layers: RBAC for fine-grained access control, tool-level permission checks, and comprehensive audit logging.
For most teams, delegating authentication to established identity providers (Keycloak, Auth0) is the pragmatic choice. Focus your engineering effort on access control and business logic rather than reinventing OAuth infrastructure.
Need a secure GSC MCP server without the implementation burden? Ekamoira's hosted GSC MCP implements all these security patterns out of the box (OAuth 2.1 with PKCE, RFC 8707 resource binding, and role-based access control) so you can focus on insights, not infrastructure.
Frequently Asked Questions
Is PKCE required for all MCP clients?
Yes. The MCP specification mandates PKCE for all clients, with the S256 code challenge method required when technically capable. This protects against authorization code interception attacks, which is especially important for public clients like Claude Desktop and Cursor that can't securely store client secrets.
What's the difference between authentication and access control in MCP?
Authentication verifies who is making the request (via OAuth tokens). Access control determines what they can do (via RBAC/permissions). Many MCP security issues arise from treating authentication as sufficient: a valid token doesn't mean the user should access every tool.
How do resource indicators (RFC 8707) prevent token theft?
Resource indicators bind tokens to specific MCP servers. When requesting authorization, the client specifies the target server URL. The issued token is only valid for that server. If an attacker steals a token, they can't use it against different MCP servers; the audience won't match.
Should I build my own authorization server or use an identity provider?
For most teams, delegating to an established identity provider (Auth0, Keycloak, Okta) is the better choice. Building your own means maintaining token storage, handling security audits, managing credential lifecycles, and staying current with OAuth specifications. Identity providers handle this complexity.
What are the most common MCP security mistakes?
The top mistakes are: using long-lived static tokens instead of OAuth, missing PKCE implementation, no tool-level access control (authentication without authorization), storing tokens in plain text, and not validating the resource/audience claim on incoming tokens.
Sources
Securing MCP Servers - InfraCloud, July 2025
Open Protocols for Agent Interoperability: Authentication on MCP - AWS, June 2025
MCP Authorization Specification - Model Context Protocol
About the Author

Founder of Ekamoira. Helping brands achieve visibility in AI-powered search through data-driven content strategies.