SPINE provides an optional integration with the Adaptive MCP Orchestrator Blueprint project for intelligent tool routing with Claude-first dispatch.
The MCPOrchestratorExecutor allows SPINE to delegate task execution to an external MCP Orchestrator service, which provides intelligent provider selection with a Claude-first bias, automatic multi-provider fallback (Claude → GPT → Gemini), MCP-based tool discovery and routing, and learning from past outcomes.
Key principle: SPINE continues to work normally if MCP Orchestrator is unavailable. The integration uses graceful degradation—falling back to SubagentExecutor automatically.
The Adaptive MCP Orchestrator Blueprint is a standalone platform consisting of three integrated parts:
┌─────────────────────────────────────────────────────────────────────────┐
│ Adaptive MCP Orchestrator Blueprint │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────────┐ ┌──────────────────────┐ ┌────────────────┐ │
│ │ MCP Orchestrator │ │ AI Assistant │ │ MCP │ │
│ │ (Core Platform) │ │ Integration │ │ Meta-Router │ │
│ │ │ │ │ │ │ │
│ │ • M1: Core Engine │ │ • Gemini Adapter │ │ • MCP Client │ │
│ │ • M2: Config │ │ • Assistant Bridge │ │ • Discovery │ │
│ │ • M3: Observability │ │ • Provider Monitor │ │ • Registry │ │
│ │ • M4: Dashboard │ │ • Multi-provider │ │ • Router │ │
│ │ • M5: Learning │ │ fallback chain │ │ • Config │ │
│ │ • M6: Infrastructure│ │ │ │ │ │
│ └──────────────────────┘ └──────────────────────┘ └────────────────┘ │
│ │
│ 951 tests across all modules │
│ │
└─────────────────────────────────────────────────────────────────────────┘
| Part | Purpose | Components |
|---|---|---|
| MCP Orchestrator | Core cognitive dispatcher | Decision engine, invocation engine, config, logging, dashboard, learning layer |
| AI Assistant Integration | Multi-provider AI access | Claude (primary), GPT, Gemini adapters with automatic fallback chain |
| MCP Meta-Router | Tool discovery and routing | MCP client, service discovery, registry, intelligent routing |
When the MCP Orchestrator receives a task, it analyzes the required capabilities, selects a provider and the appropriate tools, executes the task, and records the outcome for future routing decisions. From SPINE’s perspective, however, the Adaptive MCP Orchestrator is a black box:
┌─────────────────────────────────────┐
│ SPINE │
│ │
│ Sends: │
│ • Task description │
│ • Required capabilities │
│ • Context (project, role) │
│ │
│ Receives: │
│ • Result (success/failure) │
│ • Output content │
│ • Metadata (provider, latency) │
│ │
└──────────────────┬──────────────────┘
│
│ POST /execute
│ (single API call)
▼
┌─────────────────────────────────────┐
│ Adaptive MCP Orchestrator │
│ ═══════════════════════════ │
│ │
│ ┌─────────────────────────────┐ │
│ │ BLACK BOX MAGIC │ │
│ │ │ │
│ │ • Which provider? (Claude) │ │
│ │ • Which tools? (MCP) │ │
│ │ • Learn from outcome │ │
│ │ • Handle failures │ │
│ │ • Log everything │ │
│ │ │ │
│ └─────────────────────────────┘ │
│ │
│ SPINE doesn't need to know HOW │
│ │
└─────────────────────────────────────┘
| SPINE Sees | What Actually Happens Inside |
|---|---|
| Sends POST /execute with task | Decision engine analyzes capabilities |
| Waits for response… | Learning layer checks historical scores |
| | AI Assistant Integration picks Claude (1.5x bias) |
| | If Claude fails → automatic GPT fallback |
| | If GPT fails → automatic Gemini fallback |
| | MCP Meta-Router discovers required tools |
| | Tools are invoked with proper context |
| | Results are aggregated and scored |
| | Learning layer records outcome for future |
| Gets {"status": "success", ...} | All complexity hidden |
┌─────────────────────────────────────────────────────────────┐
│ SPINE │
│ │
│ AgenticLoop │
│ ├── TaskQueue │
│ ├── Evaluators (Build/Test/LLM) │
│ └── Executor Selection: │
│ │ │
│ ├─▶ MCPOrchestratorExecutor (if available) │
│ │ │ │
│ │ │ health_check() ──▶ Success? ──▶ Use it │
│ │ │ │
│ │ └──────────────────▶ Failed? ──┐ │
│ │ │ │
│ └─▶ SubagentExecutor (fallback) ◀───────┘ │
│ │
└──────────────────────────┬──────────────────────────────────┘
│
│ HTTP (only if MCP Orchestrator running)
│ http://localhost:8080
▼
┌─────────────────────────────────────────────────────────────┐
│ MCP Orchestrator Blueprint │
│ │
│ Core Orchestrator │
│ ├── Decision Engine (tool selection) │
│ ├── Invocation Engine (tool calling) │
│ └── Claude-first bias (1.5x weight) │
│ │
│ Supporting Modules: │
│ ├── Config Engine │
│ ├── Logging & Observability │
│ ├── Dashboard (API endpoints) │
│ └── Learning Layer (score boosts) │
│ │
└─────────────────────────────────────────────────────────────┘
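The Claude-first bias shown in the diagram is simply a multiplicative weight applied during provider scoring. As a rough, illustrative sketch (the function name and score values below are assumptions, not the orchestrator's actual code):

```python
# Illustrative only: how a 1.5x Claude-first bias could influence provider selection.
CLAUDE_BIAS = 1.5

def pick_provider(scores: dict) -> str:
    """Return the provider with the highest biased score."""
    def biased(item):
        provider, score = item
        return score * CLAUDE_BIAS if provider == "anthropic" else score
    return max(scores.items(), key=biased)[0]

# Hypothetical historical scores from the learning layer:
print(pick_provider({"anthropic": 0.70, "openai": 0.90, "google": 0.80}))
# -> "anthropic" (0.70 * 1.5 = 1.05 outweighs 0.90)
```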
| Responsibility | SPINE | MCP Orchestrator |
|---|---|---|
| WHEN to execute | AgenticLoop decides | - |
| HOW to execute | - | Intelligent routing |
| Workflow orchestration | Yes | - |
| Task queue management | Yes | - |
| Oscillation detection | Yes | - |
| Build/Test verification | Yes | - |
| Tool selection | - | Yes |
| Provider fallback | - | Yes |
| Learning from outcomes | - | Yes |
Summary: SPINE decides WHEN, MCP Orchestrator decides HOW.
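In code, this split means SPINE's loop keeps full control over scheduling and verification, while the executor call is a single delegation point. A minimal sketch (the queue and result attributes are assumptions used for illustration):

```python
# Sketch only: SPINE decides WHEN a task runs; the executor decides HOW.
def run_loop(task_queue, executor, project_path):
    while not task_queue.empty():          # WHEN: SPINE picks the next task
        task = task_queue.pop()
        result = executor.execute(         # HOW: delegated entirely to the executor
            task, project_path, role="implementer"
        )
        if not result.success:             # SPINE keeps verification and retries
            task_queue.requeue(task)
```

The executor itself is imported and created as shown below.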
from spine.orchestrator.executors.mcp_orchestrator import (
MCPOrchestratorExecutor,
MCPOrchestratorConfig,
create_mcp_executor,
)
# Simple creation with defaults
executor = create_mcp_executor(
base_url="http://localhost:8080",
capabilities=["code_generation", "python"],
)
# Check availability
if executor.is_available():
result = executor.execute(task, project_path, role="implementer")
else:
print("MCP Orchestrator not available, fallback will be used")
For finer-grained control, build the configuration explicitly:
config = MCPOrchestratorConfig(
base_url="http://localhost:8080",
timeout_seconds=60,
default_capabilities=["code_generation", "python"],
fallback_enabled=True, # Enable SubagentExecutor fallback
)
executor = MCPOrchestratorExecutor(config)
result = executor.execute(task, project_path, role="architect")
# Use MCP Orchestrator executor
python -m spine.orchestrator run --project /path \
--executor mcp-orchestrator \
--executor-url http://localhost:8080
# With explicit fallback disable (fail if unavailable)
python -m spine.orchestrator run --project /path \
--executor mcp-orchestrator \
--no-fallback
The integration is designed to never break SPINE if MCP Orchestrator is unavailable:
# Pseudocode of fallback logic
executor = MCPOrchestratorExecutor(config)
if executor.is_available():
# MCP Orchestrator is running - use it
result = executor.execute(task, project_path, role)
else:
# Automatic fallback to SubagentExecutor
logger.warning("MCP Orchestrator unavailable, using SubagentExecutor")
result = fallback_executor.execute(task, project_path, role)
| Scenario | Behavior |
|---|---|
| MCP Orchestrator not running | Automatic fallback to SubagentExecutor |
| Network timeout | Automatic fallback with warning logged |
| HTTP error (4xx/5xx) | Automatic fallback with error logged |
| httpx not installed | Import error caught, fallback used |
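These scenarios map naturally onto exception handling around the HTTP call. A sketch, assuming httpx (the exception classes are httpx's own, but the surrounding function is illustrative, not the executor's actual implementation):

```python
import logging

logger = logging.getLogger(__name__)

def call_orchestrator(task_payload: dict, base_url: str, timeout: float = 60.0):
    """Return the orchestrator response, or None to signal fallback."""
    try:
        import httpx  # optional dependency; a missing install triggers fallback
    except ImportError:
        logger.warning("httpx not installed, using SubagentExecutor fallback")
        return None

    try:
        response = httpx.post(f"{base_url}/execute", json=task_payload, timeout=timeout)
        response.raise_for_status()          # 4xx/5xx -> HTTPStatusError
        return response.json()
    except httpx.TimeoutException:
        logger.warning("MCP Orchestrator timed out, falling back")
    except httpx.HTTPStatusError as exc:
        logger.error("MCP Orchestrator returned %s, falling back", exc.response.status_code)
    except httpx.HTTPError as exc:           # connection refused, DNS failure, etc.
        logger.warning("MCP Orchestrator unreachable (%s), falling back", exc)
    return None
```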
| Variable | Default | Description |
|---|---|---|
| MCP_ORCHESTRATOR_URL | http://localhost:8080 | Base URL for MCP Orchestrator |
| MCP_ORCHESTRATOR_TIMEOUT | 60 | Request timeout in seconds |
| MCP_ORCHESTRATOR_FALLBACK | true | Enable automatic fallback |
| MCP_ORCHESTRATOR_API_KEY | - | Optional API key for authentication |
# Load config from environment variables
config = MCPOrchestratorConfig.from_env()
executor = MCPOrchestratorExecutor(config)
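As a rough picture of what from_env does with the variables above (illustrative only; the actual implementation may read additional fields such as the API key):

```python
import os

from spine.orchestrator.executors.mcp_orchestrator import MCPOrchestratorConfig

# Approximate equivalent of MCPOrchestratorConfig.from_env() (sketch, not the real code).
def load_config_from_env() -> MCPOrchestratorConfig:
    return MCPOrchestratorConfig(
        base_url=os.environ.get("MCP_ORCHESTRATOR_URL", "http://localhost:8080"),
        timeout_seconds=int(os.environ.get("MCP_ORCHESTRATOR_TIMEOUT", "60")),
        fallback_enabled=os.environ.get("MCP_ORCHESTRATOR_FALLBACK", "true").lower() == "true",
    )
```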
The executor communicates with MCP Orchestrator via these endpoints:
GET /health/ready
Returns 200 if MCP Orchestrator is ready to accept requests.
POST /execute
Content-Type: application/json
{
"task": "Generate a Python function to calculate fibonacci",
"capabilities": ["code_generation", "python"],
"context": {
"project_path": "/path/to/project",
"task_id": "task-001",
"role": "implementer"
},
"timeout_ms": 60000
}
Response:
{
"request_id": "uuid-here",
"status": "success",
"result": "def fibonacci(n):\n ...",
"tool_used": "claude_code_generation",
"provider": "anthropic",
"latency_ms": 1234,
"tokens": {
"input": 150,
"output": 200,
"total": 350
}
}
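For reference, the same contract can be exercised directly with httpx, outside the executor. A sketch using the request and response shapes documented above:

```python
import httpx

# Minimal direct call against the documented /execute contract.
payload = {
    "task": "Generate a Python function to calculate fibonacci",
    "capabilities": ["code_generation", "python"],
    "context": {"project_path": "/path/to/project", "task_id": "task-001", "role": "implementer"},
    "timeout_ms": 60000,
}

response = httpx.post("http://localhost:8080/execute", json=payload, timeout=60.0)
response.raise_for_status()
data = response.json()

if data["status"] == "success":
    print(data["result"])                          # generated code
    print(data["provider"], data["latency_ms"])    # e.g. "anthropic", 1234
```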
The executor maps SPINE roles to MCP Orchestrator capabilities:
| SPINE Role | Capabilities |
|---|---|
| architect | system_design, architecture, planning |
| implementer | code_generation, python, implementation |
| reviewer | code_review, analysis, quality |
| researcher | research, analysis, synthesis |
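The mapping is effectively a static lookup; a sketch mirroring the table (the helper name is an assumption):

```python
# Role -> capabilities lookup, mirroring the table above.
ROLE_CAPABILITIES = {
    "architect": ["system_design", "architecture", "planning"],
    "implementer": ["code_generation", "python", "implementation"],
    "reviewer": ["code_review", "analysis", "quality"],
    "researcher": ["research", "analysis", "synthesis"],
}

def capabilities_for(role: str) -> list:
    """Resolve a SPINE role to MCP Orchestrator capabilities (empty list if unknown)."""
    return ROLE_CAPABILITIES.get(role, [])
```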
When using MCP Orchestrator, the result includes additional metadata:
result = executor.execute(task, project_path, role)
# Metadata when MCP Orchestrator is used
result.metadata = {
"executor": "mcp_orchestrator",
"request_id": "uuid-here",
"tool_used": "claude_code_generation",
"provider": "anthropic",
"latency_ms": 1234,
}
# Metadata when fallback is used
result.metadata = {
"executor": "mcp_orchestrator_fallback",
"fallback_reason": "mcp_orchestrator_unavailable",
}
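One practical use of this metadata is checking which path handled a task, for logging or metrics. A sketch (the keys are the ones shown above):

```python
# Inspect result metadata to see whether the orchestrator or the fallback ran.
meta = result.metadata

if meta.get("executor") == "mcp_orchestrator":
    print(f"{meta['provider']} via {meta['tool_used']} in {meta['latency_ms']} ms")
elif meta.get("executor") == "mcp_orchestrator_fallback":
    print(f"Fallback used: {meta['fallback_reason']}")
```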
cd "/path/to/Adaptive MCP Orchestrator Blueprint"
cd projects/p6-infrastructure
docker-compose up -d
Verify:
curl http://localhost:8080/health
# Should return: {"status": "healthy", ...}
The executor requires httpx for HTTP communication:
pip install httpx
This is already included in SPINE’s requirements.txt as an optional dependency.
Symptom: warning logged and automatic fallback to SubagentExecutor (typically because the MCP Orchestrator is not running)
Check:
# Is it running?
curl http://localhost:8080/health
# Check Docker
docker ps | grep mcp
# Start if needed
docker-compose up -d
Symptom: MCP Orchestrator timeout after 60s
Solution: increase the timeout, e.g. timeout_seconds=120

Symptom: ImportError: httpx not found
Solution:
pip install httpx
If you want to disable MCP Orchestrator integration:
# Option 1: Disable fallback (fail if unavailable)
config = MCPOrchestratorConfig(fallback_enabled=False)
# Option 2: Use SubagentExecutor directly
from spine.orchestrator.executors import SubagentExecutor
executor = SubagentExecutor(config)
| ← Back to Docs | Executors → |