Mar 2, 2026
What We Learned Building Tuner's MCP Server: 5 Issues We Faced and How We Solved Them

Mohamed Mamdouh
CTO & Co-founder
We wanted onboarding at Tuner to be effortless. No long setup flows, no clicking through configs, just connect and go.
But we also wanted something smarter: if a customer is already using Claude Code or Cursor, we wanted to pull context directly from their codebase. That way, Tuner can understand what their AI agent actually does and suggest the right evaluation setup from the start, instead of asking them to configure everything from scratch.
The solution? We built our own MCP server. Here's everything we learned doing it.
Promise: Read this in 10 minutes and you'll have a clear picture of what to expect, from design decisions to the production bugs nobody warns you about.
Authentication
The first thing we had to settle was how to authenticate users to our server. The two most common approaches for MCP are OAuth 2 and API Keys — but which one to choose depends heavily on which platforms we wanted to support.
Which Clients Were We Targeting?
Deciding our target clients early shaped almost every design decision that followed. MCP clients generally fall into three categories:
Chat Apps: ChatGPT, Claude, and similar consumer-facing tools
IDEs: VS Code, Cursor, and other developer environments
Custom MCP clients: purpose-built by engineering teams
At Tuner, we decided to target both Chat Apps and IDEs. Chat apps have enormous reach with end users. IDEs are where engineers live. Since Tuner serves both, we needed to support both.
Authentication, Revisited
Once we knew our target clients, the authentication choice became clearer:
OAuth 2: Designed for browser-based token exchange. Tokens expire and need refreshing. Supported by Chat Apps but not always ideal for IDEs.
API Keys: Long-lived secrets that only expire when rotated. Better suited to IDE clients that may not support OAuth at all.
Our Decision
Implement both — for the widest compatibility across client types.
Frameworks to Speed Up Development
We didn't build everything from scratch. After evaluating the available frameworks, one stood out.
We went with FastMCP because it integrates cleanly with our existing FastAPI backend, meaning the same business logic gets exposed both as a REST API and through the MCP server. No duplication, one codebase.
Issues We Faced and How We Fixed Them
This is the part nobody documents well. Here's what actually happened:
AI Code Assistance Getting the SDK Version Wrong
We used an AI assistant throughout the implementation but ran into a frustrating problem. FastMCP had multiple active versions (version 3 was in alpha and not backward compatible), and our AI assistant kept mixing them up, generating code that silently failed.
✓ What We Did
We pinned to a specific version and fed our AI assistant the exact documentation for that version. We went with the FastMCP v3 alpha for its support of Background Tasks, Prompts as Tools, and Resources as Tools. The lesson: always specify the exact SDK version when working with an AI assistant; don't let it guess.
ChatGPT OAuth Failing on Missing Scopes
During integration testing, OAuth with ChatGPT kept failing. The root cause: ChatGPT's MCP OAuth implementation wasn't sending the openid and profile scopes required by some identity providers.
✓ Solution
The fix is documented in detail in the OpenAI community thread on missing openid scope.
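The details are in that thread, but the general shape of such a fix is to have a layer in front of the authorization endpoint fill in the missing scopes before the request reaches the identity provider. A hedged sketch (the function name and default-scope set are our illustration, not the thread's exact code):

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

# Scopes some identity providers require but ChatGPT's OAuth flow omitted.
DEFAULT_SCOPES = {"openid", "profile"}

def ensure_default_scopes(authorize_url: str) -> str:
    """Merge the required default scopes into an authorization URL."""
    parts = urlparse(authorize_url)
    query = parse_qs(parts.query)
    requested = set(query.get("scope", [""])[0].split())
    query["scope"] = [" ".join(sorted(requested | DEFAULT_SCOPES))]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))
```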
ChatGPT Rejecting Tokens From a Different Issuer Domain
We use a third-party authentication gateway, which means our OAuth token's issuer domain differs from our server's domain. ChatGPT treats this as an invalid token and drops the auth flow entirely.
✓ What We Did (Hacky, But Working)
We built a middleware layer that intercepts the authentication request and swaps the issuer URI with Tuner's own domain before it reaches ChatGPT.
⚠️ This workaround only applies to ChatGPT. Claude handles this correctly: it accepts tokens from a different issuer as long as the token signature is valid. We made sure not to apply this middleware to Claude connections.
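Conceptually, the middleware rewrites the `issuer` field of the OAuth discovery document on its way out, and only for ChatGPT. A simplified sketch (the domain and function name are placeholders, not our production code):

```python
import json

# Placeholder domain for illustration; not Tuner's real issuer.
OUR_ISSUER = "https://mcp.example.com"

def rewrite_issuer(metadata_body: bytes, client_is_chatgpt: bool) -> bytes:
    """Swap the auth gateway's issuer for our own domain in the OAuth
    discovery document, but only for ChatGPT; Claude accepts the
    original issuer as-is."""
    if not client_is_chatgpt:
        return metadata_body
    metadata = json.loads(metadata_body)
    metadata["issuer"] = OUR_ISSUER
    return json.dumps(metadata).encode()
```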
Container Orchestrator Routing Breaking Client Connections
The server worked perfectly standalone. The moment we deployed to AWS ECS, clients couldn't connect. The culprit: environment-level proxy variables in our ECS tasks, which our HTTP stack picked up by default and which interfered with MCP's routing.
✓ What We Did
We defined a custom httpx client in our MCP server configuration and told it to ignore environment-level proxy variables:
IDE Clients Connect Fine, But Every Tool Call Fails Auth
In certain environments, after an IDE client establishes a session via streamable-http, subsequent calls carry an mcp-session-id to reach the ASGI layer, but the auth token doesn't travel with them. The ASGI layer receives unauthenticated requests and rejects them.
✓ What We Did
We added a middleware before the ASGI layer that extracts the validated token from the MCP context and injects it as an Authorization header:
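A simplified, framework-agnostic sketch of that middleware (we assume the validated token has been stashed in `scope["state"]` by an earlier step; where it actually lives depends on your MCP framework):

```python
class AuthHeaderInjectorMiddleware:
    """Re-attach the validated bearer token as an Authorization header
    before the request reaches the inner ASGI app."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            # Assumption: an earlier auth step stored the validated token
            # in scope["state"]; adapt the lookup to your framework.
            token = scope.get("state", {}).get("validated_token")
            if token:
                headers = [(k, v) for k, v in scope["headers"]
                           if k.lower() != b"authorization"]
                headers.append((b"authorization", f"Bearer {token}".encode()))
                scope = dict(scope, headers=headers)
        await self.app(scope, receive, send)
```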
Conclusion
Building our remote MCP server unlocked serious capabilities for Tuner — and the core implementation was relatively straightforward. What slowed us down were the edge cases: OAuth quirks across different clients, container networking surprises, and token plumbing through async boundaries.
Our advice: define your scope clearly, pick your target clients early, and expect to debug at least one auth issue per client type. It's absolutely worth the investment. A part two of this blog will cover prompt tuning and best practices for maximizing the end-user experience of your MCP server. Stay tuned.
