# Executor Framework (v0.3.30)
SPINE’s orchestrator uses a pluggable executor architecture that separates task execution logic from the agentic loop. This allows swapping execution strategies without changing the core orchestration flow.
## Architecture

```mermaid
graph TD
    subgraph AgenticLoop
        TQ[TaskQueue] --> EX[Executor] --> EV[Evaluator]
        EX --> SE[Subagent Executor]
        EX --> CC[ClaudeCode Executor]
        EX --> MO[MCP Orchestrator]
        EX --> SL[SmallLLM Executor]
        EX --> RT[Router]
    end
    style AgenticLoop fill:#0f172a,stroke:#2563eb,color:#e2e8f0
    style TQ fill:#2563eb,stroke:#1e40af,color:#fff
    style EX fill:#7c3aed,stroke:#5b21b6,color:#fff
    style EV fill:#0d9488,stroke:#0f766e,color:#fff
    style SE fill:#1e293b,stroke:#475569,color:#e2e8f0
    style CC fill:#1e293b,stroke:#475569,color:#e2e8f0
    style MO fill:#1e293b,stroke:#475569,color:#e2e8f0
    style SL fill:#1e293b,stroke:#475569,color:#e2e8f0
    style RT fill:#1e293b,stroke:#475569,color:#e2e8f0
```
## Available Executors

### SubagentExecutor

Uses `.claude/agents/` persona definitions to execute tasks via the LLM client.
```python
from spine.orchestrator.executors import SubagentExecutor, SubagentConfig

config = SubagentConfig(
    agent_dir=project_path / ".claude" / "agents",
    model_override="claude-opus-4-5-20251101",
    use_context_stacks=True,
)
executor = SubagentExecutor(config)
result = executor.execute(task, project_path, role="implementer")
```
Features:
- Reads agent personas from `.claude/agents/*.md`
- Supports context stacks from YAML scenarios
- Role-based persona selection (architect, implementer, reviewer)
- Model override capability
CLI Usage:
```bash
python -m spine.orchestrator run --project /path \
    --executor subagent \
    --executor-model claude-opus-4-5-20251101
```
### ClaudeCodeExecutor

Spawns an agent CLI (e.g., Claude Code) as a subprocess for task execution.
```python
from spine.orchestrator.executors import ClaudeCodeExecutor, ClaudeCodeConfig

config = ClaudeCodeConfig(
    model="opus",
    max_budget_usd=5.0,
    skip_permissions=False,
    use_context_stacks=True,
)
executor = ClaudeCodeExecutor(config)
result = executor.execute(task, project_path, role="researcher")
```
Features:
- Runs agent CLI in subprocess (default: Claude Code)
- Budget control via `--max-turns` (estimated from USD)
- Permission bypass for sandboxed environments
- Context stack integration for prompt building
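The budget-to-turns translation could look like this sketch. The per-turn cost figure and the clamping range are illustrative assumptions, not the executor's actual formula.

```python
def estimate_max_turns(max_budget_usd: float,
                       est_cost_per_turn_usd: float = 0.25,
                       floor: int = 1, ceiling: int = 50) -> int:
    """Translate a USD budget into a --max-turns value for the agent CLI.

    Assumes a rough average cost per agentic turn, then clamps the
    result to a sane range so a huge budget can't run unbounded.
    """
    if max_budget_usd <= 0:
        return floor
    turns = int(max_budget_usd / est_cost_per_turn_usd)
    return max(floor, min(turns, ceiling))
```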
CLI Usage:
```bash
# Basic usage
python -m spine.orchestrator run --project /path \
    --executor claude-code \
    --executor-budget 10.0

# With permission bypass (sandboxed only)
python -m spine.orchestrator run --project /path \
    --executor claude-code \
    --executor-skip-permissions
```
### MCPOrchestratorExecutor (v0.3.21+)

Delegates task execution to the MCP Orchestrator Blueprint for intelligent tool routing.
```python
from spine.orchestrator.executors.mcp_orchestrator import (
    MCPOrchestratorExecutor,
    MCPOrchestratorConfig,
    create_mcp_executor,
)

# Simple creation
executor = create_mcp_executor(
    base_url="http://localhost:8080",
    capabilities=["code_generation", "python"],
    fallback_enabled=True,
)
result = executor.execute(task, project_path, role="implementer")
```
Features:
- Intelligent tool selection based on capabilities
- Configurable provider priority with automatic fallback
- Graceful degradation - falls back to SubagentExecutor if unavailable
- Learning from outcomes (score boosts)
Graceful Fallback:

```python
# If MCP Orchestrator is unavailable, automatic fallback occurs
if not executor.is_available():
    # SubagentExecutor is used automatically
    result.metadata["executor"] = "mcp_orchestrator_fallback"
```
CLI Usage:
```bash
# Use MCP Orchestrator executor
python -m spine.orchestrator run --project /path \
    --executor mcp-orchestrator \
    --executor-url http://localhost:8080

# Disable fallback (fail if unavailable)
python -m spine.orchestrator run --project /path \
    --executor mcp-orchestrator \
    --no-fallback
```
→ Full MCP Orchestrator Integration Guide
### SmallLLMExecutor (v0.3.27+)

Orchestrates 3B-8B quantized language models via MCP self-description layers for cost-optimized, edge-capable task execution.
```python
from spine.orchestrator.executors.small_llm_executor import SmallLLMExecutor, SmallLLMConfig

config = SmallLLMConfig(
    model_name="qwen2.5-coder:3b",
    provider="ollama",  # "ollama" | "anthropic"
    base_url="http://localhost:11434",
    max_context_tokens=4096,
    mcp_servers=["research-agent-mcp", "evaluation-mcp"],
    temperature=0.1,
)
executor = SmallLLMExecutor(config)
result = executor.execute(task, project_path)
```
Features:
- 4-layer MCP self-description context (L0 instructions, L1 schema, L2 resources, L3 prompts)
- Simple `TOOL_CALL:` output format optimized for small-model parsing
- Ollama (local) and Anthropic Haiku (API) providers
- Uses MCPSessionPool for persistent MCP connections (v0.3.28)
- Graceful degradation when MCP context unavailable
CLI Usage:
```bash
python -m spine.orchestrator run --project /path --executor small-llm
```
### TaskTypeRouter (v0.3.26+)
Dynamic routing executor that classifies tasks and delegates to the best executor per type.
```python
from spine.orchestrator.task_router import TaskTypeRouter, TaskType, RoutingRule
from spine.orchestrator.executors import SubagentExecutor, ClaudeCodeExecutor

# code_config, research_config, default_config are defined elsewhere
rules = [
    RoutingRule(TaskType.CODE, SubagentExecutor(code_config)),
    RoutingRule(TaskType.RESEARCH, ClaudeCodeExecutor(research_config)),
]
router = TaskTypeRouter(rules=rules, fallback=SubagentExecutor(default_config))
result = router.execute(task, project_path, role="implementer")
```
Features:
- Heuristic task classification (no LLM call needed — fast and free)
- 6 task types: CODE, RESEARCH, CONTENT, REVIEW, ANALYSIS, GENERAL
- Implements Executor interface — transparent to AgenticLoop
- Config-driven routing rules
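A keyword-based heuristic classifier in this spirit might look as follows. The keyword lists and tie-breaking are illustrative assumptions, not SPINE's actual rules; the `TaskType` enum is redefined here so the sketch is self-contained.

```python
from enum import Enum

class TaskType(Enum):  # mirrors the router's six types
    CODE = "code"
    RESEARCH = "research"
    CONTENT = "content"
    REVIEW = "review"
    ANALYSIS = "analysis"
    GENERAL = "general"

# Hypothetical keyword table; real rules would be config-driven
_KEYWORDS = {
    TaskType.CODE: ("implement", "fix", "refactor", "bug", "function"),
    TaskType.RESEARCH: ("research", "investigate", "compare", "survey"),
    TaskType.CONTENT: ("draft", "blog", "article", "video"),
    TaskType.REVIEW: ("review", "audit", "critique"),
    TaskType.ANALYSIS: ("analyze", "profile", "measure", "benchmark"),
}

def classify(description: str) -> TaskType:
    """Return the type with the most keyword hits; GENERAL if none match."""
    text = description.lower()
    scores = {t: sum(k in text for k in kws) for t, kws in _KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else TaskType.GENERAL
```

Because this is plain string matching, classification adds microseconds rather than an extra LLM round-trip.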
CLI Usage:
```bash
python -m spine.orchestrator run --project /path \
    --executor router \
    --route CODE:subagent \
    --route RESEARCH:claude-code
```
### PlaceholderExecutor
A no-op executor for testing and development. Returns a configurable static result without making any LLM calls or executing any logic.
```python
from spine.orchestrator.executors.base import PlaceholderExecutor

executor = PlaceholderExecutor(
    default_output="Placeholder result",
    default_success=True,
)
result = executor.execute(task, project_path, role="implementer")
```
Features:
- Zero external dependencies — no LLM calls, no MCP connections
- Configurable success/failure responses
- Useful for testing orchestration flows, dry runs, and pipeline validation
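As an example of this testing pattern, a pipeline test can drive orchestration logic with a stub in place of a real executor. The sketch below uses minimal stand-in classes (not SPINE's imports) so it is self-contained; `run_pipeline` is a hypothetical loop, not the real AgenticLoop.

```python
from dataclasses import dataclass, field

@dataclass
class StubResult:            # minimal stand-in for ExecutorResult
    success: bool
    output: str
    changes: list[str] = field(default_factory=list)

class StubExecutor:          # minimal stand-in for PlaceholderExecutor
    def __init__(self, default_output="stub", default_success=True):
        self.default_output = default_output
        self.default_success = default_success
        self.calls = 0       # lets tests assert how often execute() ran

    def execute(self, task, project_path, role=None):
        self.calls += 1
        return StubResult(self.default_success, self.default_output)

def run_pipeline(tasks, executor):
    """Drain a task list, stopping at the first failure."""
    results = []
    for task in tasks:
        r = executor.execute(task, project_path=".", role=None)
        results.append(r)
        if not r.success:
            break
    return results
```

This exercises the stop-on-failure control flow without any LLM calls.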
### ContentPipelineExecutor
Handles video and content generation workflows with multi-step processing.
```python
from spine.orchestrator.executors.content_pipeline import (
    ContentPipelineExecutor,
    ContentPipelineConfig,
)

config = ContentPipelineConfig(
    pipeline_stages=["research", "draft", "review", "publish"],
    parallel_stages=True,
)
executor = ContentPipelineExecutor(config)
result = executor.execute(task, project_path, role="content-creator")
```
Features:
- Multi-stage content processing pipeline
- Parallel stage execution where dependencies allow
- Integrates with context stacks for stage-specific prompts
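"Parallel where dependencies allow" typically reduces to batching stages by dependency depth. A sketch, assuming a stage-to-dependencies map (the map shape is an illustration, not the executor's actual config):

```python
def parallel_groups(deps: dict[str, set[str]]) -> list[list[str]]:
    """Batch stages so each group depends only on earlier groups.

    `deps` maps stage -> set of stages it depends on. Stages within
    one group have no mutual dependencies and can run concurrently.
    """
    remaining = dict(deps)
    done: set[str] = set()
    groups: list[list[str]] = []
    while remaining:
        ready = sorted(s for s, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError("dependency cycle among stages")
        groups.append(ready)
        done.update(ready)
        for s in ready:
            del remaining[s]
    return groups
```

For the linear research → draft → review → publish pipeline above this yields four sequential groups; independent stages would share a group.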
## Executor Interface

All executors implement the base `Executor` interface:
```python
from pathlib import Path

from spine.orchestrator.executors import ExecutorResult
from spine.orchestrator.task_queue import Task

class Executor:
    def execute(
        self,
        task: Task,
        project_path: Path,
        role: str | None = None,
    ) -> ExecutorResult:
        """Execute a task and return the result."""
        ...
```
ExecutorResult:

```python
from dataclasses import dataclass

@dataclass
class ExecutorResult:
    success: bool
    changes: list[str]        # List of changes made
    output: str               # Raw executor output
    error: str | None = None  # Error message if failed
    tokens_used: int = 0      # Token consumption
    duration_ms: int = 0      # Execution time
```
## Context Stack Integration

The SubagentExecutor and ClaudeCodeExecutor support context stacks from YAML scenario files (v0.3.20):
```bash
# Use a specific scenario
python -m spine.orchestrator run --project /path \
    --executor subagent \
    --scenario scenarios/research.yaml

# Role-specific scenarios
python -m spine.orchestrator run --project /path \
    --executor claude-code \
    --role-scenario "architect:scenarios/architecture.yaml" \
    --role-scenario "reviewer:scenarios/code-review.yaml"

# Disable context stacks (use legacy prompts)
python -m spine.orchestrator run --project /path \
    --executor subagent \
    --no-context-stacks
```
See Context Stack Integration for details on scenario files.
## Choosing an Executor
| Executor | Best For | Trade-offs |
|---|---|---|
| SubagentExecutor | Programmatic control, persona-based tasks | Requires agent definitions |
| ClaudeCodeExecutor | Full CLI capabilities, file operations | Higher overhead, subprocess management |
| MCPOrchestratorExecutor | Intelligent tool routing, multi-provider | Requires external service, adds latency |
| SmallLLMExecutor | Cost-optimized tasks, edge deployment | Limited model capability, needs MCP context |
| ContentPipelineExecutor | Multi-stage content generation | Pipeline-specific, content workflows |
| PlaceholderExecutor | Testing, dry runs, pipeline validation | No-op, returns static results |
| TaskTypeRouter | Mixed workloads, automatic delegation | Adds classification step, needs routing rules |
## Custom Executors

Create custom executors by implementing the base interface:
```python
from pathlib import Path

from spine.orchestrator.executors import Executor, ExecutorResult
from spine.orchestrator.task_queue import Task

class MyExecutor(Executor):
    def execute(
        self,
        task: Task,
        project_path: Path,
        role: str | None = None,
    ) -> ExecutorResult:
        # Your execution logic here
        return ExecutorResult(
            success=True,
            changes=["[OK] did_something"],
            output="Task completed",
        )
```