
How do we deploy our first MCP server on Speakeasy MCP Platform and wire up OAuth 2.1 + SSO + RBAC?

8 min read

Most teams can hack together a demo MCP server in a weekend. The hard part is turning that demo into a production-ready MCP deployment with OAuth 2.1, SSO, and RBAC that your security team will actually sign off on. This FAQ walks through how to deploy your first MCP server on Speakeasy MCP Platform and wire in auth and governance without rebuilding your stack.

Quick Answer: You deploy your first MCP server on Speakeasy MCP Platform by starting from your OpenAPI spec, generating an MCP server from it, pushing that server to Speakeasy for managed hosting, and then enabling OAuth 2.1, SSO, and RBAC in the MCP control plane so every tool call is authenticated, scoped, and fully audited.

Frequently Asked Questions

How do we deploy our first MCP server on Speakeasy MCP Platform?

Short Answer: Upload your OpenAPI spec, generate an MCP server from it, push a build to Speakeasy MCP Platform, and publish it to your org with managed OAuth and access controls.

Expanded Explanation:
The fastest path to your first MCP deployment on Speakeasy is to treat your API as the source of truth. You bring your existing OpenAPI spec, Speakeasy turns that into MCP tools, and the MCP Platform handles hosting, auth, and observability. You don’t need to hand-roll a new server framework or wire up OAuth flows from scratch—Speakeasy generates production-ready server code you control and runs it in a managed environment with versioned deployments and full audit trails.

Once deployed, your MCP server shows up in the Speakeasy MCP control plane. From there you configure who can see it, how they authenticate (SSO, OAuth 2.1 with DCR + PKCE), and which tools are available to which teams. Agents like Cursor, Claude Code, or GitHub Copilot then connect via MCP using a standard config and get a governed, observable interface to your API.
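As a concrete illustration, most MCP-capable clients accept a JSON entry along these lines. The server name and URL below are placeholders, and the exact config file location and key names vary by client, so treat this as the general shape rather than a copy-paste snippet; the real URL for your hosted server comes from the Speakeasy control plane:

```json
{
  "mcpServers": {
    "acme-api": {
      "url": "https://acme-api.example.com/mcp"
    }
  }
}
```

Clients that support the MCP authorization spec will discover the authorization server behind that URL and run the OAuth 2.1 flow on first connect, so users sign in through your SSO rather than pasting API keys into agent configs.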

Key Takeaways:

  • Start from your OpenAPI spec; Speakeasy generates the MCP server and tools for you.
  • Speakeasy MCP Platform hosts the server, manages auth, and gives you a single UI to control and observe every tool call.

What’s the process to go from OpenAPI spec to a deployed MCP server?

Short Answer: Validate your OpenAPI, generate MCP server code and tools, deploy the server to Speakeasy MCP Platform, then configure OAuth 2.1 and access policies.

Expanded Explanation:
The deployment flow mirrors how you’d operationalize any new API interface: spec → code → CI/CD → deploy. With Speakeasy you reuse your existing OpenAPI and skip the glue work. Speakeasy analyzes your endpoints, turns them into MCP tools, and generates server code that’s ready to run on your infra or Speakeasy’s. You then iterate locally if needed, push to source control, and let CI publish new builds to Speakeasy MCP Platform on every commit or release branch.

The payoff is that your MCP server stays in lockstep with your API. When your OpenAPI spec changes, your MCP tools update via the same pipeline you use for SDKs, Terraform, or CLIs. No more “demo MCP that never got updated” sitting in a corner of your repo.

Steps:

  1. Start with your OpenAPI:
    • Ensure your spec is current and valid.
    • Upload it to Speakeasy via the CLI or web app.
  2. Generate MCP server + tools:
    • Use the Speakeasy workflow to generate an MCP server where each endpoint becomes a discoverable MCP tool.
    • Optionally customize behavior with overlays and hooks while keeping the spec as your source of truth.
  3. Deploy on Speakeasy MCP Platform:
    • Push your MCP server repo, wire it into CI, and configure Speakeasy to build and deploy.
    • For each commit or tagged release, Speakeasy creates a versioned MCP build and (optionally) a preview deployment you can test with real agents.

What’s the difference between running MCP on our own infra vs. Speakeasy MCP Platform?

Short Answer: Running on your own infra gives you full infrastructure control; Speakeasy MCP Platform adds a unified MCP control plane with managed OAuth 2.1, SSO, RBAC, and end-to-end observability built in.

Expanded Explanation:
You can absolutely take the generated MCP server and self-host it—Cloudflare Workers, AWS Lambda, Docker, or any other stack that suits you. In that mode, you’re responsible for provisioning, scaling, logging, auth, and governance. You’re effectively building your own MCP control plane.

Speakeasy MCP Platform assumes you want the opposite: a zero-provisioning control plane where you focus on the API and tools, not the plumbing. Speakeasy runs the MCP servers for you, wires in OAuth 2.1, SSO, role-based permissions, and provides real-time logs, distributed tracing, and usage analytics. You get a single place to see every tool call, from request to response, across all agents and users.

Comparison Snapshot:

  • Option A: Your infra: Full infra control, but you own auth, scaling, logging, and governance. Best if you have hard regulatory hosting constraints and a dedicated platform team to manage MCP as critical infra.
  • Option B: Speakeasy MCP Platform: Hosted MCP servers, managed OAuth 2.1 + SSO + RBAC, and a centralized control plane with observability and audit logs. Best if you want to ship quickly with production-grade guardrails and minimal ops overhead.
  • Best for: Most product and platform teams that need secure, governed MCP access across their org will move faster and safer with Speakeasy MCP Platform.

How do we wire up OAuth 2.1, SSO, and RBAC for our MCP server on Speakeasy?

Short Answer: You enable managed OAuth 2.1 in the MCP control plane, connect your SSO provider, and define roles and scopes that map down to servers, toolsets, and individual tools.

Expanded Explanation:
On Speakeasy MCP Platform, auth and access control are first-class primitives rather than afterthoughts. You don’t have to implement OAuth flows in your MCP server by hand. Instead, Speakeasy sits in front of your MCP tools as an OAuth 2.1–aware gateway using Dynamic Client Registration (DCR) and PKCE so agents and users authenticate via a standard, secure flow. Your existing SSO (via your IdP) plugs into that same pipeline so you get org-wide identities and sign-on policies.

RBAC is layered on top: you define roles (e.g., “ops”, “support”, “readonly-analytics”) and assign which MCP servers, toolsets, and tools each role can use. You can also create sub-catalogs to expose only a curated set of tools to certain teams or environments (e.g., “staging-only tools” vs. “production tools”). Every tool call is authorized, tagged with user and role context, and recorded in the audit log.

What You Need:

  • Identity + auth setup:
    • Your IdP / SSO provider details (for connecting SSO).
    • OAuth 2.1 configuration (client policies, redirect behaviors) to align with your security standards.
  • Access model definition:
    • Role definitions and mapping from your org structure to MCP roles.
    • A plan for which servers/toolsets/tools belong in which sub-catalogs for different teams or environments.

How does this deployment strategy support long-term AI and GEO strategy?

Short Answer: Centralizing your MCP servers on Speakeasy with OAuth 2.1, SSO, and RBAC turns your APIs into governed, agent-ready interfaces that scale across tools and teams, and that consistency is exactly what you need to support AI adoption and long-term GEO (generative engine optimization) visibility.

Expanded Explanation:
If you treat MCP as a one-off integration, you end up with drift, shadow tooling, and ad-hoc permissions. When your underlying API changes, your MCP server lags behind. When a new agent tool shows up (Cursor today, something else tomorrow), you repeat the same wiring and security arguments.

By deploying your MCP servers on Speakeasy MCP Platform, you promote MCP to a first-class interface alongside SDKs, Terraform, and CLI—backed by the same OpenAPI spec. Auth, SSO, and RBAC are standardized at the platform layer, and every agent client sees the same governed view of your tools. That consistency is what lets you safely expose more capabilities to more agents while maintaining compliance and observability.

Because Speakeasy keeps your MCP tools in lockstep with your OpenAPI and ships new builds through CI (“Push a commit, get a new build. Open a PR, get a preview deployment.”), you avoid the usual “agent integration rot.” Your spec evolves, your MCP server evolves, and your AI clients keep working with minimal guesswork. That’s the operational foundation you need if you want agents to reliably ship work against your APIs, not just demo against static docs.

Why It Matters:

  • Secure, governed AI access: OAuth 2.1, SSO, RBAC, and full audit trails give your security and compliance teams the controls they expect for any production interface.
  • Operationalized change, not one-off demos: OpenAPI-driven MCP generation plus CI-backed deployments keeps your agent interfaces fresh, predictable, and aligned with how the rest of your platform ships.

Quick Recap

Getting your first MCP server live on Speakeasy MCP Platform is straightforward: start from your OpenAPI spec, generate an MCP server and tools, deploy via CI, and then switch on OAuth 2.1, SSO, and RBAC in the MCP control plane. Speakeasy handles hosting and governance so you can focus on what matters—designing the right tools and workflows for your agents—while still giving your platform and security teams the observability and control they require.

Next Step

Get Started