Skillbase / spm

MCP Integration

How spm works with AI clients via the Model Context Protocol.

What is MCP?

The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. spm uses MCP to expose skills to any compatible AI client.

Architecture

AI Client (Claude, Cursor, VS Code, Windsurf, JetBrains, ...)
    │
    │ MCP (stdio)
    │
    ▼
spm serve
    │
    ├── skill_list     → Returns compact skill index
    ├── skill_load     → Loads full skill into context
    ├── skill_context  → Current session state
    ├── skill_search   → Search local/remote skills
    ├── skill_feedback → Record usage results
    └── skill_install  → Install from registry

The AI model communicates with spm through 6 MCP tools.

MCP Tools

skill_list

Returns a compact index of all installed skills.

{
  "skills": [
    {
      "name": "docx",
      "version": "1.2.0",
      "trigger": "Use when creating Word documents",
      "tags": ["docx", "word"],
      "priority": 50,
      "tokens_estimate": 3500
    }
  ]
}

The model uses this index to decide which skill to load for a given task.
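The selection step can be sketched in code. This is a toy scorer, not spm's actual logic: the real model reasons semantically over the trigger descriptions, while this sketch only matches tags literally. The `pick_skill` helper is hypothetical.

```python
def pick_skill(index, request):
    # Toy scorer: +1 per tag that appears in the request text. The real
    # model reasons semantically over the trigger description instead.
    def score(skill):
        return sum(1 for tag in skill["tags"] if tag in request.lower())

    candidates = [s for s in index["skills"] if score(s) > 0]
    # Prefer higher priority, then cheaper estimated token cost.
    candidates.sort(key=lambda s: (-s["priority"], s["tokens_estimate"]))
    return candidates[0]["name"] if candidates else None
```

With the index above, a request like "Create a Word document" matches the `docx` skill via its `word` tag, while an unrelated request returns no candidate.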

skill_load

Loads a specific skill's instructions into the model's context.

Parameters:

  • name — skill name (e.g., docx)

Returns the full SKILL.md content along with metadata (permissions, token count, confidence, works_with relationships).
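On the wire, loading the docx skill is an ordinary MCP tools/call request over JSON-RPC 2.0. The shape below follows the MCP specification; the `id` and exact framing depend on the client:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "skill_load",
    "arguments": { "name": "docx" }
  }
}
```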

skill_context

Returns the current session state:

{
  "loaded": [
    { "name": "docx", "version": "1.2.0", "tokens": 3500 }
  ],
  "tokens_used": 3500,
  "tokens_available": 6500,
  "total_skills": 12
}
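The model can use this state to check whether a skill fits before loading it. A minimal sketch (the implied 10,000-token budget is inferred from the example above, not a documented constant):

```python
def can_load(context, tokens_estimate):
    # A skill fits only if its estimate is within the remaining budget.
    return tokens_estimate <= context["tokens_available"]
```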

skill_search

Search for skills by keyword, tag, or file pattern.

Parameters:

  • query — search term
  • scope — local, remote, or all

Returns matches from both scopes:
{
  "local": [
    { "name": "docx", "score": 10, "confidence": 0.92 }
  ],
  "remote": [
    { "name": "docx-pro", "author": "someauthor", "registry": "main", "installs": 1200 }
  ]
}

skill_feedback

Records feedback after using a skill.

Parameters:

  • name — skill name
  • result — success, partial, failure, or false_trigger
  • comment — optional description

This feeds into the confidence scoring system.
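One plausible way such feedback could be aggregated is a smoothed success rate. This is illustrative only: spm's actual confidence formula is not documented here, and the weights below are assumptions.

```python
def confidence(results):
    # Illustrative scoring only; spm's real formula is not documented here.
    # Laplace-smoothed average: success = 1, partial = 0.5,
    # failure / false_trigger = 0, with a neutral 0.5 prior.
    weight = {"success": 1.0, "partial": 0.5,
              "failure": 0.0, "false_trigger": 0.0}
    scores = [weight[r] for r in results]
    return (sum(scores) + 1.0) / (len(scores) + 2.0)
```

The prior pulls unrated skills toward 0.5, so a skill needs a track record before its score moves far in either direction.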

skill_install

Installs a skill from a remote registry. Requires user approval before proceeding.

Supported clients

spm supports 14 AI clients out of the box. One command to connect:

spm connect <client>

Client              ID           Aliases   Config format
Claude Desktop      claude       —         mcpServers
Claude Code         claude-code  —         mcpServers
Cursor              cursor       —         mcpServers
VS Code (Copilot)   vscode       code      mcp.servers
Windsurf            windsurf     —         mcpServers
JetBrains IDEs      jetbrains    jb        mcpServers
Zed                 zed          —         context_servers
Cline               cline        —         mcpServers
Roo Code            roo-code     roo       mcpServers
Continue            continue     —         mcpServers
Amazon Q            amazonq      —         mcpServers
Gemini CLI          gemini       —         mcpServers
OpenCode            opencode     —         mcp
OpenClaw            openclaw     oc        mcpServers
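Most of these configs use the standard mcpServers shape. An illustrative entry (the exact fields spm writes may differ slightly per client):

```json
{
  "mcpServers": {
    "spm": {
      "command": "spm",
      "args": ["serve", "--stdio"]
    }
  }
}
```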

Claude Desktop

spm connect claude

Writes to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or ~/.config/Claude/claude_desktop_config.json (Linux).

Claude Code

spm connect claude-code

Writes to ~/.claude.json. Available across all projects.

Cursor

spm connect cursor

Writes to ~/.cursor/mcp.json. Cursor has a 40-tool limit across all MCP servers — spm registers 6 tools, well within budget.

VS Code (GitHub Copilot)

spm connect vscode
# or
spm connect code

Writes to VS Code's settings.json under the mcp.servers key. Requires GitHub Copilot with agent mode enabled.

Windsurf

spm connect windsurf

Writes to ~/.codeium/windsurf/mcp_config.json. Windsurf has a 100-tool limit.

JetBrains IDEs

spm connect jetbrains
# or
spm connect jb

Writes to ~/.junie/mcp/mcp.json. Works with IntelliJ IDEA, WebStorm, PyCharm, GoLand, Rider, and all other JetBrains IDEs.

Zed

spm connect zed

Writes to ~/.config/zed/settings.json under context_servers.

Cline

spm connect cline

Writes to Cline's cline_mcp_settings.json inside VS Code's globalStorage.

Roo Code

spm connect roo-code
# or
spm connect roo

Writes to Roo Code's mcp_settings.json inside VS Code's globalStorage.

Continue

spm connect continue

Writes to ~/.continue/mcp.json. Works with both VS Code and JetBrains versions.

Amazon Q Developer

spm connect amazonq

Writes to ~/.aws/amazonq/mcp.json.

Gemini CLI

spm connect gemini

Writes to ~/.gemini/settings.json.

OpenCode

spm connect opencode

Writes to ~/.config/opencode/opencode.json. Uses a combined command array format instead of separate command/args.

OpenClaw

spm connect openclaw
# or
spm connect oc

Writes to ~/.openclaw/workspace/config/mcporter.json. OpenClaw also supports full persona deployment — see Deploy Targets for exporting personas as OpenClaw agent workspaces with channel bindings.

Any MCP-compatible client

spm uses stdio transport. Any client that supports MCP over stdio can use spm — just point it at spm serve --stdio.
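A minimal sketch of driving the server directly, assuming MCP's newline-delimited JSON-RPC stdio framing. The `list_tools` driver is hypothetical and omits the `initialize` handshake a real client performs first:

```python
import json
import subprocess


def rpc_message(method, params, msg_id=1):
    # MCP's stdio transport exchanges newline-delimited JSON-RPC 2.0.
    return json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params}) + "\n"


def list_tools():
    # Spawn the server and ask for its tool list. A real client sends an
    # `initialize` request (and `notifications/initialized`) before this.
    proc = subprocess.Popen(["spm", "serve", "--stdio"],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    proc.stdin.write(rpc_message("tools/list", {}))
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())
```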

Disconnecting

spm disconnect <client>

Removes the spm entry from the client's config file without touching other settings.

How the model selects skills

  1. At session start, the MCP server embeds a compact skill index directly into instructions. The model sees all installed skills immediately — no tool call required. The index uses a compact pipe-delimited format:
name|tokens|trigger description|tags|file_patterns

Example:

frontend-developer|1250t|Any frontend task: React/Vue components...|frontend,react,vue|*.tsx,*.jsx,*.vue
docx|3500t|Use when creating Word documents|docx,word|*.docx
  2. When a user request comes in, the model evaluates candidates from the embedded index:
    • Trigger description match (semantic reasoning against the user's request)
    • Tag match (keywords in the request)
    • File pattern match (e.g., user mentions .docx files)
    • Token cost (shown in the index, helps the model decide before loading)
  3. If a match is found, the model calls skill_load before starting work.
  4. If there is no local match, the model calls skill_search with scope: 'remote'.
  5. For multi-skill tasks, the model uses works_with to load complementary skills.

The skill_list tool remains available for programmatic access to the full index (including versions and priorities) but is no longer required for discovery.
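The pipe-delimited format above is simple to parse. A sketch, with the field layout taken from the example (the parser name is illustrative):

```python
def parse_index_line(line):
    # Field layout from the embedded index format:
    # name|tokens|trigger description|tags|file_patterns
    name, tokens, trigger, tags, patterns = line.split("|")
    return {
        "name": name,
        "tokens": int(tokens.rstrip("t")),  # "3500t" -> 3500
        "trigger": trigger,
        "tags": tags.split(","),
        "file_patterns": patterns.split(","),
    }
```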

Confidence scoring

Skills have a confidence score (0.0–1.0) computed from usage feedback:

  • Automatic: The model calls skill_feedback after each use
  • Explicit: Users run spm rate <name> --score 1-5

If confidence is low (<0.5), the model treats the skill as guidance rather than strict instructions.

View confidence scores:

spm stats