AI Assistant Integration

Multi-provider AI routing system that bridges external AI assistants (OpenAI, Gemini, Claude) to the MCP Orchestrator with health monitoring and feedback collection.

189 tests passing · 5 modules · multi-provider

5-Module Architecture

p02-1 OpenAI Adapter

GPT-4 integration with streaming support, function calling, and error handling.

Provider: OpenAI

p02-2 Gemini Adapter

Google Gemini integration with multimodal support and safety settings.

20 tests | Provider: Google

p02-3 Assistant Bridge

Main integration bridge with intent mapping, tool conversion, and feedback collection.

86 tests | Core Bridge

p02-4 Provider Monitor

Health checking, latency tracking, and availability monitoring for all providers.

48 tests | Health Monitor

p02-5 Assistant Config

Pydantic schemas for provider configuration, rate limits, and API keys.

35 tests | Config Schemas
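A provider config schema along these lines would cover the fields mentioned above (field names and defaults here are assumptions for illustration; the real `assistant_config` models may differ):

```python
from pydantic import BaseModel, Field, SecretStr

class ProviderConfig(BaseModel):
    """Illustrative provider config: identity, credentials, rate limit, timeout."""
    name: str
    api_key: SecretStr                       # kept secret in reprs/logs
    max_requests_per_minute: int = Field(default=60, gt=0)
    timeout_s: float = 30.0

cfg = ProviderConfig(name="openai", api_key="sk-...", max_requests_per_minute=90)
print(cfg.name)  # openai
```

Using `SecretStr` keeps API keys out of logs and tracebacks, and the `gt=0` constraint rejects nonsensical rate limits at load time.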

Request Flow

User Request → Intent Mapper → Provider Monitor → OpenAI / Gemini / Claude → Feedback Collector

Each request is mapped to capabilities, routed to the healthiest available provider, and its outcome is recorded as feedback to inform future routing decisions.
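"Healthiest provider" selection might look like the following sketch, assuming the monitor exposes availability, success rate, and latency per provider (the `ProviderHealth` shape is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ProviderHealth:
    name: str
    available: bool
    success_rate: float   # 0.0-1.0 over a recent window
    latency_ms: float     # rolling average

def pick_healthiest(providers: list[ProviderHealth]) -> str:
    """Prefer available providers with the highest success rate; break ties on latency."""
    candidates = [p for p in providers if p.available]
    if not candidates:
        raise RuntimeError("no available providers")
    best = max(candidates, key=lambda p: (p.success_rate, -p.latency_ms))
    return best.name

providers = [
    ProviderHealth("openai", True, 0.99, 850.0),
    ProviderHealth("gemini", True, 0.99, 620.0),
    ProviderHealth("claude", False, 1.0, 400.0),
]
print(pick_healthiest(providers))  # gemini: tied on success rate, lower latency
```

Unavailable providers are filtered out before scoring, so a fast but down provider (claude above) is never selected.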

Key Features

Multi-Provider Support

OpenAI GPT-4, Google Gemini, and Anthropic Claude with unified interface.

Intent Mapping

Map natural language intents to capabilities using pattern matching.
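Pattern-based intent mapping can be sketched as a table of compiled regexes keyed by capability (the patterns and capability names here are illustrative, not the bridge's actual table):

```python
import re

# Hypothetical intent -> capability patterns.
INTENT_PATTERNS = {
    "code_generation": re.compile(
        r"\b(write|generate|implement)\b.*\b(code|function|algorithm)\b", re.I
    ),
    "summarization": re.compile(r"\b(summari[sz]e|tl;?dr)\b", re.I),
}

def map_intent(text: str) -> list[str]:
    """Return every capability whose pattern matches the request text."""
    return [cap for cap, pat in INTENT_PATTERNS.items() if pat.search(text)]

print(map_intent("Generate a sorting algorithm"))  # ['code_generation']
```

Returning a list rather than a single capability lets one request trigger multiple capabilities, which the router can then intersect with what each provider supports.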

Tool Conversion

Convert tool schemas between providers (OpenAI, Anthropic, Gemini formats).
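As one direction of that conversion: OpenAI nests a tool's spec under a `function` key with a `parameters` JSON Schema, while Anthropic flattens it and calls the schema `input_schema`. A minimal converter (helper name is ours, not the module's):

```python
def openai_tool_to_anthropic(tool: dict) -> dict:
    """Convert an OpenAI function-calling tool schema to Anthropic's shape."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "input_schema": fn["parameters"],  # same JSON Schema, different key
    }

openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
print(openai_tool_to_anthropic(openai_tool)["input_schema"]["required"])  # ['city']
```

The parameters themselves are plain JSON Schema in both formats, so only the envelope changes.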

Health Monitoring

Track latency, success rates, and availability for each provider.
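Latency and success-rate tracking over a recent window can be sketched with a bounded deque (illustrative; the real `ProviderMonitor` API may differ):

```python
from collections import deque

class RollingHealth:
    """Track the last `window` call outcomes for one provider."""

    def __init__(self, window: int = 100):
        self.samples: deque = deque(maxlen=window)  # (ok, latency_ms) pairs

    def record(self, ok: bool, latency_ms: float) -> None:
        self.samples.append((ok, latency_ms))

    @property
    def success_rate(self) -> float:
        if not self.samples:
            return 0.0
        return sum(ok for ok, _ in self.samples) / len(self.samples)

    @property
    def avg_latency_ms(self) -> float:
        if not self.samples:
            return 0.0
        return sum(ms for _, ms in self.samples) / len(self.samples)

h = RollingHealth(window=3)
for ok, ms in [(True, 900), (True, 700), (False, 1500)]:
    h.record(ok, ms)
print(round(h.success_rate, 2))    # 0.67
print(round(h.avg_latency_ms, 2))  # 1033.33
```

The `maxlen` bound means old outcomes age out automatically, so a provider that recovers after an outage regains a good score once the window refills.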

Feedback Collection

Collect success/failure feedback to improve future routing decisions.
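A feedback record and collector along these lines would support that loop (the field names and aggregation are assumptions, not the module's actual schema):

```python
import time
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    """Outcome of one routed request."""
    provider: str
    capability: str
    success: bool
    latency_ms: float
    timestamp: float = field(default_factory=time.time)

class FeedbackCollector:
    """Aggregates per-provider outcomes for the router to consult."""

    def __init__(self) -> None:
        self.records: list[FeedbackRecord] = []

    def record(self, rec: FeedbackRecord) -> None:
        self.records.append(rec)

    def success_rate(self, provider: str) -> float:
        hits = [r for r in self.records if r.provider == provider]
        return sum(r.success for r in hits) / len(hits) if hits else 0.0

fc = FeedbackCollector()
fc.record(FeedbackRecord("openai", "code_generation", True, 900.0))
fc.record(FeedbackRecord("openai", "code_generation", False, 2100.0))
print(fc.success_rate("openai"))  # 0.5
```

Keeping capability on each record lets the router learn per-capability strengths, not just an overall per-provider score.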

Rate Limiting

Built-in rate limit tracking and automatic provider fallback.

Quick Example

import asyncio

from assistant_bridge import AssistantBridge
from provider_monitor import ProviderMonitor

async def main():
    # Initialize bridge with health monitoring
    monitor = ProviderMonitor()
    bridge = AssistantBridge(monitor=monitor)

    # Configure providers
    bridge.add_provider("openai", api_key="sk-...")
    bridge.add_provider("gemini", api_key="AI...")

    # Execute task - automatically routed to the healthiest provider
    response = await bridge.execute(
        task="Generate a sorting algorithm",
        capabilities=["code_generation"],
    )

    print(response.provider)  # "openai" or "gemini"
    print(response.latency)   # e.g. 1234 (ms)

asyncio.run(main())