charlietools

I wrote an OAuth server before I wrote this post

Setting up a personal MCP server. The first tool was easy. The auth was not.

I wanted Claude Code on every machine I work from to share one set of tools, and I wanted those tools available from my phone too. Claude Code skills live on disk, so every laptop I use needs its own copy. claude.ai skills live in your account, uploaded separately. Two copies of the same logic, drifting from the moment I edit one. Felt wrong the moment I typed it.

The right answer is a small server that speaks the Model Context Protocol. Both clients connect to the same URL, both call the same tools, no copying.

I picked the smallest plausible stack: TypeScript with the official MCP SDK, Express, Docker, deployed as a single replica on the Swarm cluster I already had running. The first version exposed a commit_blog_draft tool that creates a draft/<slug> branch in the blog’s source repo and writes an MDX file. End to end, maybe an hour of writing code.
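For context, registering that first tool with the official TypeScript SDK looks roughly like this. The tool name matches the one above; the schema fields and handler body are illustrative stand-ins, since the real handler talks to the GitHub API.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "charlietools", version: "1.0.0" });

server.registerTool(
  "commit_blog_draft",
  {
    description: "Create a draft/<slug> branch and commit an MDX draft",
    inputSchema: {
      slug: z.string(),
      title: z.string(),
      body: z.string(),
    },
  },
  async ({ slug }) => {
    // Illustrative stand-in: the real handler creates the branch via the
    // GitHub API and writes the MDX file onto it.
    return {
      content: [{ type: "text", text: `created draft/${slug}` }],
    };
  },
);
```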

The auth, though.

What claude.ai’s connector form actually wants

claude.ai has a UI for adding custom MCP connectors. The form has four fields: name, URL, optional OAuth Client ID, optional OAuth Client Secret. I read “optional” and assumed I could skip OAuth, ship a static bearer token, and be done.

I was wrong about “optional.” claude.ai requires OAuth 2.0 with Dynamic Client Registration (RFC 7591), every time. The “optional” fields exist for the case where the server has a pre-registered application and the user pastes its credentials. If both are empty, claude.ai attempts Dynamic Client Registration against the server. If the server does not support DCR, the form rejects you with a vague “couldn’t reach the MCP server” message.
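Concretely, a DCR registration is a POST of client metadata to the server’s registration endpoint. The payload has roughly this shape; the field names come from RFC 7591, the grant_types value is what claude.ai actually asks for, and the redirect URI and auth method shown here are illustrative assumptions, not captured values.

```json
{
  "client_name": "Claude",
  "redirect_uris": ["https://claude.ai/api/mcp/auth_callback"],
  "grant_types": ["authorization_code", "refresh_token"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none"
}
```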

My homelab’s identity provider, Authelia, is great. But it does not speak DCR. Clients have to be registered ahead of time in a YAML file. claude.ai is not going to commit to my homelab repo.

Two real choices:

  • Front the MCP server with a Cloudflare Worker that does the OAuth dance and forwards traffic. Possible. Adds a runtime I do not otherwise have.
  • Build a small OAuth authorization server into the MCP server itself. More code in one place, no new infrastructure.

I picked the second one. node-oidc-provider is a battle-tested OIDC implementation with DCR support already wired up. Hand it a JWKS, hand it cookie keys, hand it a “find me a user by ID” callback, and it does the rest.
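A minimal sketch of that wiring, assuming node-oidc-provider’s configuration shape; the key material here is placeholder, not the server’s real config.

```typescript
import Provider from "oidc-provider";

// Placeholders: real key material comes from secrets at deploy time.
const jwks = { keys: [/* private signing keys */] };
const cookieKeys = ["cookie-signing-secret"];

const provider = new Provider("https://mcp.example/", {
  jwks,
  cookies: { keys: cookieKeys },
  features: {
    // The Dynamic Client Registration (RFC 7591) support claude.ai needs.
    registration: { enabled: true },
  },
  // The "find me a user by ID" callback. Single-user server, so every
  // id resolves to the same shape of account.
  async findAccount(_ctx, id) {
    return { accountId: id, claims: async () => ({ sub: id }) };
  },
});
```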

Until you try to run it.

Five things that broke before it worked

  1. DCR returned 400 because I had allowed too few grant types. claude.ai’s registration payload asks for grant_types: ["authorization_code", "refresh_token"]. My provider only advertised authorization_code. Refresh tokens require enabling the offline_access scope, which I had skipped on the (incorrect) assumption that I would never need refresh. One config line.

  2. Login worked, consent crashed. After the password form, the provider tried to ask the user to consent to the requested scopes. Single-user server, no point asking. I had wired up loadExistingGrant to auto-grant, but only for a hardcoded scope list that did not include offline_access. The flow hit my unimplemented consent handler and returned a 501 with the message “Unsupported prompt: consent.” Visible to the user. Embarrassing.

  3. Token exchange returned 200. Every MCP request after it returned 401. I was validating access tokens with provider.AccessToken.find(token), which the docs do say is the right call. It is, for opaque tokens. I had configured access tokens as JWTs so they could be validated without a database round-trip, and find() searches by JTI, not by the JWT string. Switched to jose.jwtVerify against the JWKS and it lit up.

  4. claude.ai sends MCP traffic to /, not the URL the user pastes. After the OAuth dance completes, claude.ai derives the MCP endpoint from the OAuth issuer. The issuer was https://mcp.example/, so claude.ai started POSTing to https://mcp.example/, ignoring the /mcp path on the connector. Mounted the same handlers at both. Belt and suspenders.

  5. Every redeploy nuked claude.ai’s session. node-oidc-provider ships with an in-memory adapter as a “quick start.” It logs a warning on boot. I read the warning, decided I would deal with it later, and was promptly punished with a “delete and re-add the connector” tax on every code change. Plugged the provider into ioredis against the Redis instance I was already running for other things, and the tax went away.
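Pulled together, the configuration-side fixes (1, 2, and 5) amount to a few lines. A sketch against node-oidc-provider’s options, with RedisAdapter and its module path standing in for my actual ioredis-backed adapter class:

```typescript
import Provider from "oidc-provider";
import { RedisAdapter } from "./redis-adapter.js"; // fix 5: persistent, survives redeploys

const provider = new Provider("https://mcp.example/", {
  adapter: RedisAdapter,
  // Fix 1: advertise offline_access so DCR can accept
  // grant_types: ["authorization_code", "refresh_token"].
  scopes: ["openid", "offline_access"],
  // Fix 2: auto-grant everything the client asks for, so the consent
  // prompt never fires on a single-user server.
  async loadExistingGrant(ctx) {
    const grant = new ctx.oidc.provider.Grant({
      clientId: ctx.oidc.client?.clientId,
      accountId: ctx.oidc.session?.accountId,
    });
    grant.addOIDCScope(String(ctx.oidc.params?.scope ?? ""));
    await grant.save();
    return grant;
  },
});
```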

Each one took maybe twenty minutes to find, two minutes to fix. Cumulative time looking at OAuth specs and oidc-provider source code: most of an afternoon.
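The third failure is the subtle one, so here is the distinction spelled out. A hypothetical helper (not from the server) showing why looking a token up by its literal string works for opaque tokens but not for JWTs, whose lookup key lives inside the payload:

```typescript
// An opaque token is a random string the provider stores and looks up
// by value. A JWT is three base64url segments joined by dots.
function looksLikeJwt(token: string): boolean {
  const parts = token.split(".");
  return parts.length === 3 && parts.every((p) => /^[A-Za-z0-9_-]+$/.test(p));
}

// The database key for a JWT access token is the jti claim inside the
// payload, not the token string itself, which is why find(token) missed.
function jwtId(token: string): string | undefined {
  const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64url").toString("utf8"),
  );
  return payload.jti;
}
```

In production the right move is still jose.jwtVerify against the JWKS, which checks the signature rather than just the shape.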

What’s on the server now

Two tools:

  • get_voice_guide returns the editorial rules for this blog.
  • commit_blog_draft takes a slug, title, description, tags, and a markdown body, then opens a draft branch on GitHub. This post was committed by it.

Plus one Claude Code specific prompt that embeds the voice guide and the writing instructions inline, so the slash command works without an extra tool round-trip. claude.ai does not surface MCP prompts in the UI, so the get_voice_guide tool is the substitute there.
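The prompt side looks roughly like this, assuming the current TypeScript SDK’s registerPrompt call; the prompt name and the embedded text are placeholders, not the server’s actual content.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "charlietools", version: "1.0.0" });
const voiceGuide = "placeholder for the editorial rules, embedded at build time";

server.registerPrompt(
  "blog-draft",
  { description: "Draft a post in this blog's voice" },
  async () => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          // Voice guide and instructions inline: no extra tool round-trip.
          text: `${voiceGuide}\n\nWrite the draft following the rules above.`,
        },
      },
    ],
  }),
);
```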

Next up: a homelab journal tool, a snippet vault for the hundred and seventeen variations of Connect-MgGraph I have written in the last year, maybe a query tool for the rack inventory so I can ask “what’s plugged into op2 port 5” without being at a desk.

The pattern I keep coming back to: anything I want to do from my phone the same way I do it from a terminal becomes a tool on this server. The OAuth is paid for. The next addition is a registerTool call away.

What I would skip if I did it again

Nothing, actually. The first four failures all surfaced clear error responses; reading the wire traffic led straight to every fix. The fifth one was avoidable if I had taken the boot warning seriously instead of writing it off as a “for serious deployments” disclaimer.

That is also a defensible default. Plenty of warnings really are for serious deployments only. This one was not.
