# MCP Integration

How spm works with AI clients via the Model Context Protocol.
## What is MCP?

The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. spm uses MCP to expose skills to any compatible AI client.
## Architecture

```
AI Client (Claude, Cursor, VS Code, Windsurf, JetBrains, ...)
    │
    │ MCP (stdio)
    │
    ▼
spm serve
    │
    ├── skill_list     → Returns compact skill index
    ├── skill_load     → Loads full skill into context
    ├── skill_context  → Current session state
    ├── skill_search   → Search local/remote skills
    ├── skill_feedback → Record usage results
    └── skill_install  → Install from registry
```
The AI model communicates with spm through 6 MCP tools.
## MCP Tools

### skill_list

Returns a compact index of all installed skills.
```json
{
  "skills": [
    {
      "name": "docx",
      "version": "1.2.0",
      "trigger": "Use when creating Word documents",
      "tags": ["docx", "word"],
      "priority": 50,
      "tokens_estimate": 3500
    }
  ]
}
```

The model uses this index to decide which skill to load for a given task.
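To make the selection step concrete, here is a hedged sketch of how a model (or a test harness) might score index entries against a user request. The field names mirror the JSON above; the weights and matching logic are illustrative, not spm's actual algorithm.

```python
def score_skill(skill: dict, request: str) -> int:
    """Illustrative scoring: trigger-description words and tags found in the request."""
    text = request.lower()
    score = 0
    # Each trigger-description word present in the request adds 1
    score += sum(1 for w in skill["trigger"].lower().split() if w in text)
    # Tags are stronger signals, so they add 2
    score += sum(2 for tag in skill["tags"] if tag in text)
    return score

index = [
    {"name": "docx", "trigger": "Use when creating Word documents", "tags": ["docx", "word"]},
    {"name": "xlsx", "trigger": "Use when creating spreadsheets", "tags": ["xlsx", "excel"]},
]

best = max(index, key=lambda s: score_skill(s, "Create a Word document from this outline"))
print(best["name"])  # docx
```

In practice the model does this reasoning semantically rather than by string matching, but the inputs are the same: trigger text, tags, and the request.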
### skill_load

Loads a specific skill's instructions into the model's context.

Parameters:

- `name` — skill name (e.g., `docx`)

Returns the full SKILL.md content along with metadata (permissions, token count, confidence, `works_with` relationships).
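On the wire, an MCP tool invocation is a JSON-RPC 2.0 `tools/call` request over stdio. A `skill_load` call would look roughly like this (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "skill_load",
    "arguments": { "name": "docx" }
  }
}
```

Clients normally never construct this by hand; the MCP SDK or the AI client does it for them.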
### skill_context

Returns the current session state:

```json
{
  "loaded": [
    { "name": "docx", "version": "1.2.0", "tokens": 3500 }
  ],
  "tokens_used": 3500,
  "tokens_available": 6500,
  "total_skills": 12
}
```

### skill_search
Search for skills by keyword, tag, or file pattern.
Parameters:

- `query` — search term
- `scope` — `local`, `remote`, or `all`
```json
{
  "local": [
    { "name": "docx", "score": 10, "confidence": 0.92 }
  ],
  "remote": [
    { "name": "docx-pro", "author": "someauthor", "registry": "main", "installs": 1200 }
  ]
}
```

### skill_feedback
Records feedback after using a skill.
Parameters:

- `name` — skill name
- `result` — `success`, `partial`, `failure`, or `false_trigger`
- `comment` — optional description
This feeds into the confidence scoring system.
### skill_install
Installs a skill from a remote registry. Requires user approval before proceeding.
## Supported clients

spm supports 14 AI clients out of the box. One command to connect:

```shell
spm connect <client>
```

| Client | ID | Aliases | Config format |
|---|---|---|---|
| Claude Desktop | claude | — | mcpServers |
| Claude Code | claude-code | — | mcpServers |
| Cursor | cursor | — | mcpServers |
| VS Code (Copilot) | vscode | code | mcp.servers |
| Windsurf | windsurf | — | mcpServers |
| JetBrains IDEs | jetbrains | jb | mcpServers |
| Zed | zed | — | context_servers |
| Cline | cline | — | mcpServers |
| Roo Code | roo-code | roo | mcpServers |
| Continue | continue | — | mcpServers |
| Amazon Q | amazonq | — | mcpServers |
| Gemini CLI | gemini | — | mcpServers |
| OpenCode | opencode | — | mcp |
| OpenClaw | openclaw | oc | mcpServers |
### Claude Desktop

```shell
spm connect claude
```

Writes to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `~/.config/Claude/claude_desktop_config.json` (Linux).
### Claude Code

```shell
spm connect claude-code
```

Writes to `~/.claude.json`. Available across all projects.
### Cursor

```shell
spm connect cursor
```

Writes to `~/.cursor/mcp.json`. Cursor has a 40-tool limit across all MCP servers — spm uses 6 tools, well within budget.
### VS Code (GitHub Copilot)

```shell
spm connect vscode
# or
spm connect code
```

Writes to VS Code's `settings.json` under the `mcp.servers` key. Requires GitHub Copilot with agent mode enabled.
### Windsurf

```shell
spm connect windsurf
```

Writes to `~/.codeium/windsurf/mcp_config.json`. Windsurf has a 100-tool limit.
### JetBrains IDEs

```shell
spm connect jetbrains
# or
spm connect jb
```

Writes to `~/.junie/mcp/mcp.json`. Works with IntelliJ IDEA, WebStorm, PyCharm, GoLand, Rider, and all other JetBrains IDEs.
### Zed

```shell
spm connect zed
```

Writes to `~/.config/zed/settings.json` under `context_servers`.
### Cline

```shell
spm connect cline
```

Writes to Cline's `cline_mcp_settings.json` inside VS Code's globalStorage.
### Roo Code

```shell
spm connect roo-code
# or
spm connect roo
```

Writes to Roo Code's `mcp_settings.json` inside VS Code's globalStorage.
### Continue

```shell
spm connect continue
```

Writes to `~/.continue/mcp.json`. Works with both VS Code and JetBrains versions.
### Amazon Q Developer

```shell
spm connect amazonq
```

Writes to `~/.aws/amazonq/mcp.json`.
### Gemini CLI

```shell
spm connect gemini
```

Writes to `~/.gemini/settings.json`.
### OpenCode

```shell
spm connect opencode
```

Writes to `~/.config/opencode/opencode.json`. Uses a combined command array format instead of separate command/args.
### OpenClaw

```shell
spm connect openclaw
# or
spm connect oc
```

Writes to `~/.openclaw/workspace/config/mcporter.json`. OpenClaw also supports full persona deployment — see Deploy Targets for exporting personas as OpenClaw agent workspaces with channel bindings.
### Any MCP-compatible client

spm uses stdio transport. Any client that supports MCP over stdio can use spm — just point it at `spm serve --stdio`.
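For a client without built-in `spm connect` support, the server entry typically looks like this in an `mcpServers`-style config (exact key names vary by client, as the table above shows):

```json
{
  "mcpServers": {
    "spm": {
      "command": "spm",
      "args": ["serve", "--stdio"]
    }
  }
}
```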
## Disconnecting

```shell
spm disconnect <client>
```

Removes the spm entry from the client's config file without touching other settings.
## How the model selects skills

1. At session start, the MCP server embeds a compact skill index directly into its instructions. The model sees all installed skills immediately — no tool call required. The index uses a compact pipe-delimited format:

   ```
   name|tokens|trigger description|tags|file_patterns
   ```

   Example:

   ```
   frontend-developer|1250t|Any frontend task: React/Vue components...|frontend,react,vue|*.tsx,*.jsx,*.vue
   docx|3500t|Use when creating Word documents|docx,word|*.docx
   ```

2. When a user request comes in, the model evaluates candidates from the embedded index:
   - Trigger description match (semantic reasoning against the user's request)
   - Tag match (keywords in the request)
   - File pattern match (e.g., the user mentions `.docx` files)
   - Token cost (shown in the index, helps the model decide before loading)
3. If a match is found, the model calls `skill_load` before starting work.
4. If no local match, the model calls `skill_search` with `scope: "remote"`.
5. For multi-skill tasks, the model uses `works_with` to load complementary skills.

The `skill_list` tool remains available for programmatic access to the full index (including versions and priorities) but is no longer required for discovery.
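The index format and the file-pattern signal above can be sketched in code. This parses the pipe-delimited lines and checks filename matches with Python's `fnmatch`; the field names follow the format shown above, but the matching logic is illustrative rather than spm's own.

```python
import fnmatch

def parse_index(raw: str) -> list[dict]:
    """Parse name|tokens|trigger|tags|file_patterns lines into dicts."""
    skills = []
    for line in raw.strip().splitlines():
        name, tokens, trigger, tags, patterns = line.split("|")
        skills.append({
            "name": name,
            "tokens": int(tokens.rstrip("t")),  # "3500t" -> 3500
            "trigger": trigger,
            "tags": tags.split(","),
            "patterns": patterns.split(","),
        })
    return skills

def matches_file(skill: dict, filename: str) -> bool:
    """File-pattern signal: does any glob pattern match the mentioned file?"""
    return any(fnmatch.fnmatch(filename, p) for p in skill["patterns"])

index = parse_index(
    "docx|3500t|Use when creating Word documents|docx,word|*.docx\n"
    "frontend-developer|1250t|Any frontend task|frontend,react,vue|*.tsx,*.jsx,*.vue"
)
print([s["name"] for s in index if matches_file(s, "report.docx")])  # ['docx']
```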
## Confidence scoring

Skills have a confidence score (0.0–1.0) computed from usage feedback:

- Automatic: the model calls `skill_feedback` after each use
- Explicit: users run `spm rate <name> --score 1-5`
If confidence is low (<0.5), the model treats the skill as guidance rather than strict instructions.
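spm's exact scoring formula isn't documented here. One plausible shape — an exponential moving average over feedback results — is sketched below purely for intuition; the result weights and smoothing factor are invented, not spm's actual values.

```python
# Hypothetical result weights — not spm's actual values
WEIGHTS = {"success": 1.0, "partial": 0.5, "failure": 0.0, "false_trigger": 0.0}

def update_confidence(confidence: float, result: str, alpha: float = 0.2) -> float:
    """Move confidence toward the latest result's weight (exponential moving average)."""
    return (1 - alpha) * confidence + alpha * WEIGHTS[result]

c = 0.8
for r in ["success", "failure", "success"]:
    c = update_confidence(c, r)
print(round(c, 3))  # 0.738
```

Whatever the real formula, the key property is the same: repeated failures or false triggers pull the score down toward the "guidance only" threshold.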
View confidence scores:

```shell
spm stats
```